pesenti's comments | Hacker News

Interactive visualization with free access: https://citiesmoving.com/visualizations/


The difference between the US and even its northern neighbor, Canada, seems stark.

That being said, how does Manhattan have a greater percentage of car trips than NYC? Are hundreds of thousands of people taking Taxis and Ubers?


I noticed this as well. I find it hard to believe that the car is the most common form of transportation to work in New York. I would assume public transit and walking would dominate.


> Are hundreds of thousands of people taking Taxis and Ubers?

Probably. Not just the tourists, but business folks. I remember people would regularly take a taxi for a business meeting only a couple blocks away. I guess if the company is paying for it, why not.


What will the cost be? When sending back function call results, how many tokens will be billed? Just the ones corresponding to the results, or those plus the full context?


Usually just the result tokens plus the prompt tokens; there might be a special prompt used here.
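To make the billing question concrete: in a typical OpenAI-style function-calling API (an assumption on my part, since the thread doesn't name the API), the prompt side of the bill includes the full conversation context resent each turn plus the appended function result. A rough sketch with placeholder prices:

```python
# Rough cost sketch for one function-calling turn. Assumptions: the full
# context is resent as prompt tokens each turn, the function result is
# appended to it, and the per-1k prices below are placeholders, not real rates.

def turn_cost(context_tokens: int, result_tokens: int, completion_tokens: int,
              prompt_price_per_1k: float = 0.01,
              completion_price_per_1k: float = 0.03) -> float:
    """Estimate the cost of one turn where a function result is sent back."""
    prompt_tokens = context_tokens + result_tokens  # full context + result
    return (prompt_tokens / 1000) * prompt_price_per_1k \
         + (completion_tokens / 1000) * completion_price_per_1k

# e.g. 2,000 tokens of existing context, a 300-token function result,
# and a 150-token model reply
print(round(turn_cost(2000, 300, 150), 4))
```

The point of the sketch: because the whole context rides along as prompt tokens, long conversations make every subsequent function-call turn more expensive, not just the result itself.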



Paper: https://www.science.org/doi/10.1126/science.ade9097

Code: https://github.com/facebookresearch/diplomacy_cicero

Site: https://ai.facebook.com/research/cicero/

Expert player vs. Cicero AI: https://www.youtube.com/watch?v=u5192bvUS7k

RFP: https://ai.facebook.com/research/request-for-proposal/toward...

The most interesting anecdote I heard from the team: "during the tournament dozens of human players never even suspected they were playing against a bot even though we played dozens of games online."


"Having read the paper & supplementary materials, watched narrated game & spoken to one of the human players I'm pretty concerned. The @ScienceMagazine paper centres 'human-AI cooperation' & the bot is not supposed to lie. However, videos clearly show deception/manipulation"

"Screenshots of the stab below.

The human player said: "The bot is supposed to never lie [...] I doubt this was the case here" "I was definitely caught more off guard as a result of this message; I knew the bot doesn't lie, so I thought the stab wouldn't happen." "

"I'd like the researchers involved to say quite a bit more about "A.3 Manipulation"

What are possible prevention, detection & mitigation steps?

What are the possible use cases? What are the benefits/downsides of them? Has Meta considered developing products based on this?" -- Haydn Belfield, a Cambridge University researcher who focuses on the security implications of artificial intelligence (AI).

https://twitter.com/HaydnBelfield/status/1595168102924402688

https://www.cser.ac.uk/team/haydn-belfield/


As far as I can tell, as described in the paper, the bot in fact never lies, in this sense: there is a model that generates messages from moves, where messages should correspond to moves, and when the bot sends any message, at the time, it is generated from moves the bot truthfully intends to play.

On the other hand, the bot has no concept whatsoever of keeping its word. After saying words, it is free to change its mind about what moves to play, motivated by, for example, messages from other players.
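A toy sketch of the property described above (this is not Cicero's actual code; every name here is made up for illustration): the message is conditioned on the agent's current intent, so it is truthful at send time, but nothing ties the agent to what it said once it re-plans.

```python
# Toy illustration (hypothetical, not from the Cicero codebase) of an agent
# that never lies at send time yet keeps no promises: messages are generated
# from its CURRENT intended moves, and intent is re-planned after new dialogue.

class HonestButFickleAgent:
    def __init__(self, planner, message_model):
        self.planner = planner              # game state -> intended moves
        self.message_model = message_model  # intended moves -> message text
        self.intent = None

    def compose_message(self, state):
        # Message is generated from the moves the agent truthfully
        # intends to play at this moment.
        self.intent = self.planner(state)
        return self.message_model(self.intent)

    def receive_message(self, state, incoming):
        # New dialogue triggers re-planning; nothing binds the agent
        # to its earlier words.
        self.intent = self.planner(state + [incoming])

    def play(self):
        return self.intent


# Example: truthful announcement, then a changed plan.
agent = HonestButFickleAgent(
    planner=lambda state: state[-1] if state else "hold",
    message_model=lambda intent: f"I will {intent}",
)
print(agent.compose_message(["attack Munich"]))  # matches current intent
agent.receive_message(["attack Munich"], "please retreat")
print(agent.play())                              # intent has since changed
```

So "never lies" and "freely stabs" are compatible: honesty is checked only at generation time, never against past messages.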


> [snip] when the bot sends any message, at the time, it is generated from moves the bot truthfully intends to play.

> On the other hand, the bot has no concept whatsoever of keeping its word. After saying words, it is free to change its mind [snip]

Reminds of that one Asimov story about the robot who had a different interpretation of the first law of robotics. If my very hazy memory is right, the idea was that the robot could put a person in danger if it knew that it had the ability to prevent any damage from happening, but once it caused the danger, it could choose not to act and allow the person to come to harm.

I might be remembering this incorrectly, it's been a very long time since I read the story, but that was the first thought that came to mind when reading your comment :).


Amusingly, it's just like how the "buggers" are described in Ender's Game!


I don't see anything in the paper that says the bot isn't supposed to lie. Lying and being deceptive are part of the game.


The paper does describe the bot's architecture which makes the bot incapable of lying in a certain technical sense. See what I wrote elsewhere.


Hmm, I guess Facebook doesn't have to go through an IRB for human-subject experiments, nor does Science require it, apparently.


Do you actually think it would be a good thing if an IRB was required for this type of thing? Sure, it's "human experimentation", but the likelihood of any serious harm is basically zero.

It fits the zeitgeist to argue for whatever makes life harder for big tech companies, but they are big enough that they can afford things like that. It's smaller companies and academics that would end up not being able to innovate as much.

Go down that road and you end up with an IRB evaluation required for an A/B test that changes the color of a button.


Agreed. This is using an AI to play a game; an IRB seems like overkill. I guess the only potential problem would be if it went off the rails and started spouting toxic language, but that presumably was not a real possibility.


That's nothing. Someone trained a GPT-J 6B model on 4chan and then let it loose on the forums for a day. It took about 15k messages until people suspected something was off, and even then it was only because the bot's country flag, Seychelles, a rare flag on 4chan, was a giveaway.

video: https://www.youtube.com/watch?v=efPrtcLdcdM

model: https://huggingface.co/ykilcher/gpt-4chan


I didn't read the paper. <==

Creating an AI to lie seems like the Wrong Path. If that's the case, Zuckerberg should shut this down ASAP.


IRB is for the government.


Research paper: https://makeavideo.studio/Make-A-Video.pdf

Examples: https://make-a-video.github.io/

Demo site: https://makeavideo.studio/

I am told live demo and open model are on the way.


They almost always say that but never deliver. Won't be out even in a year's time.


I am actually no longer working for Meta. I do think the title reflects the news and is factually correct, but I'm happy for @dang to change it back, as it's not exactly the title of the article.


Ideally the submission title would have matched the blog title; then it would have been not only factually correct but also free of omissions. It seems disingenuous to post a submission with that title, knowing how unpopular the FB account requirement is in this circle, while tacitly omitting the fact that a Meta account is going to be required. It might not have been your intent, but it comes across as suspicious, as a lie of omission.


Sure, agreed. I try to always submit the exact title for that reason, but I got lazy as I was on my phone.


It does not reflect the news accurately. Very few people here distinguish between "Meta" and "Facebook". An account is still required to use the device, and that account is still owned by Meta/Facebook.

(As an aside, you should probably update your HN profile, it still reads "Now VP of AI at Facebook.")


Do you still hold stock?


I am sure the market move is going to be drastic.


I do.


>I do think the title reflects the news and is factually correct.

This is one of those times where one is only "technically" correct thanks to weasel-y phrasing.


Are all 200x200 translation directions handled directly, or is English (or another language) used as an intermediate for some of them?


All translation directions are direct from language X to language Y, with no intermediary. We evaluate the quality through 40,602 different translation directions using FLORES-200. 2,440 directions contain supervised training data created through our data effort, and the remaining 38,162 are zero-shot.
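The arithmetic behind those figures checks out if the evaluation covers 202 languages (my inference; the comment itself doesn't state the language count): every ordered pair of distinct languages is one translation direction. A quick sanity check:

```python
# Sanity-check the direction counts quoted above. Assumption: 202 evaluation
# languages, which is the n that makes the quoted numbers work out.
# Every ordered pair (X -> Y, X != Y) counts as one translation direction.
n_languages = 202
directions = n_languages * (n_languages - 1)  # 202 * 201 = 40,602
supervised = 2_440
zero_shot = directions - supervised           # 38,162
print(directions, supervised, zero_shot)
```

Which matches the comment: 40,602 directions total, 2,440 with supervised training data, and the remaining 38,162 evaluated zero-shot.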




Also note the comments from hello_im_angela (= Angela Fan) and jw4ng (= Jeff Wang). Those are the HN accounts for Angela and Jeff from No Language Left Behind.

