Hacker News | yreg's comments

Is this a lucid dream?

YouTube is full of scam-baiting videos – people who waste scammers' time for entertainment.

A common scenario is that the scammer pretends to be a technician doing remote support and, for example, pretends to provide a refund. Then they pretend that they've mistakenly sent out, say, 10x the amount and ask for the difference back, claiming that their job is on the line.

Crypto would work, but since they target old and tech-illiterate people, the easiest way is usually to ask the victim to go to a store, buy gift cards and read out the codes.

Google Kitboga (a well-known scam baiter) for the videos.


“Do not redeem the…! WHY DID YOU DO THAT!?” lol

They’re great entertainment pieces, and almost a commentary on the state of the world through the lens of microeconomics, with both sides behaving in a way they think is best for them.

For the baiters, they get engagement and, sometimes, the feeling of revenge for a scam visited upon an elderly relative; for the scammer, maybe it’s worse, as we know some people are trafficked into places then forced to scam people (or maybe they just want money). Still, kinda paints the world in a sad light.


I guess the days of the scammer grunts are numbered. It is eventually going to be cheaper and more efficient to use a language model. Only the scammer architects who come up with the schemes will be able to extract value.

When that happens, there won't be much entertainment or ethical value left in scam-baiting. We need to enjoy it while we can.


There's no reason to presume that the author 'thinks' that.


Then why did they mention it (their credentials)? It has literally zero relevance in this case. Maybe they were trying to show off?


It's relevant because it shows they are not a newbie on the platform and are unlikely to have misbehaved in some capacity that would warrant a full deactivation. It adds credibility to their story.


“This isn’t just an email address; it is my core digital identity”

If he doesn’t think like that, then why does he act like it?


[flagged]


> That sentence smells like AI writing, so who knows what the author actually thinks.

The author has been a professional writer since long before LLMs were invented: https://hey.paris/books-and-events/books/

LLMs were trained on books like the ones written by the author, which is why AI writing "smells" like professional writing. The reason AI is notorious for using em dashes, for example, is that professional authors use them, whereas amateur writers tend not to.

It's becoming absurd that we're now accusing professional writers of being AI.


I didn't mention em dashes anywhere in my comment!

If this isn't AI writing, why use the header "The “New Account” Trap" followed by further sub-headers "The Legal Catch", "The Technical Trap", "The Developer Risk"... I have done a lot of copyreading in my life, and humans simply didn't write this way prior to recent years.


> humans simply didn’t write this way prior to recent years.

Aren’t LLMs evidence that humans did write this way? They’re literally trained to copy humans on vast swaths of human written content. What evidence do you have to back up your claim?


Decades of experience reading blog posts and newspaper articles. They simply never contained this many section headers or bolded phrases after bullet points, and especially not in the "The [awkward noun phrase]" format heavily favored by LLMs.


So what would explain why AI writes a certain way, when there is no mechanism for it, and when the way AI works is to favor what humans do? LLM training includes many more writing samples than you’ve ever seen. Maybe you have a biased sample, or maybe you’re misremembering? The article’s style is called an outline; we were taught in school to write the way the author did.


Why did LLMs add tons of emoji to everything for a while, and then dial back on it more recently?

The problem is they were trained on everything, yet the common style for a blog post previously differed from the common style of a technical book, which differed from the common style of a throwaway Reddit post, etc.

There's a weird baseline assumption of AI outputting "good" or "professional" style, but this simply isn't the case. Good writing doesn't repeat the same basic phrasing for every section header, and insert tons of unnecessary headers in the first place.


Yes, training data is a plausible answer to your own question there, as well as mine above. And that explanation does not support your claims that AI is writing differently than humans, it only suggests training sets vary.

Repeating your thesis three times in slightly different words was taught in school. Using outline style and headings to make your points clear was taught in school. People have been writing like this for a long time.

If your argument depends on your subjective idea of “good writing”, that may explain why you think AI & blog styles are changing; they are changing. That still doesn’t suggest that LLMs veer from what they see.

All that aside, as other people have mentioned already, whether someone is using AI is irrelevant, and believing you can detect it and accusing people of using AI is quickly becoming a lazy trope, and often incorrect to boot.


You’re pointlessly derailing a conversation with a claim you can’t support that isn’t relevant even if true.

Regardless of whether AI wrote that line, he published it, and we can safely assume it is what he thinks.


[flagged]


I don’t think you even know what you’re arguing about anymore. You claimed that what the author wrote wasn’t what the author thinks. As evidence you provided weak arguments about other parts of it being AI written and made an appeal to your own authority. It doesn’t matter if AI wrote that line, he wrote it, a ghost writer wrote it or a billion monkeys wrote it. He published it as his own work and you can act as if he thinks it even if you don’t otherwise trust him or the article.


Ah, I see the confusion, you're still focusing entirely on this one "this isn't just x; it's y" line. I was mostly talking about the piece as a whole, for pretty much everything other than the first sentence of my first comment above. Sincere apologies if I didn't state that clearly.


LLMs learned from human writing. They might amplify the frequency of some particular affectations, but they didn't come up with those affectations themselves. They write like that because some people write like that.


[flagged]


Those are different levels of abstraction. LLMs can say false things, but the overall structure and style is, at this point, generally correct (if repetitive/boring at times). Same with image gen. They can get the general structure and vibe pretty well, but inspecting the individual "facts" like number of fingers may reveal problems.


That seems like a straw man. Image generation matches style quite well. LLM hallucination conjures untrue statements while still matching the training data's style and word choices.


[flagged]


> AI may output certain things at a vastly different rate than it appears in the training data

That’s a subjective statement, but generally speaking, not true. If it were, LLMs would produce unintelligible text & images. The way neural networks function is fundamentally to produce data that is statistically similar to the training data. Context, prompts, and training data are what drive the style. Whatever trends you believe you’re seeing in AI can be explained by context, prompts, and training data, and isn’t an inherent part of AI.

Extra fingers are known as hallucination, so if it’s a different phenomenon, then nobody knows what you’re talking about, and you are saying your analogy to fingers doesn’t work. In the case of images, the tokens are pixels, while in the case of LLMs, the tokens are approximately syllables. Finger hallucinations are a lack of larger structural understanding, but they statistically mimic the inputs and are not examples of frequency differences.


This is a bad faith argument and you know it.


> I didn't mention em dashes anywhere in my comment!

I know. I just mentioned them as another silly but common reason why people unjustly accuse professional writers of being AI.

> I have done a lot of copyreading in my life and humans simply didn't write this way prior to recent years.

What would you have written instead?


Most of those section headers and bolded bullet-point summary phrases should simply be removed. That's why I described them as superfluous.

In cases where it makes sense to divide an article into sections, the phrasing should be varied so that they aren't mostly of the same format ("The Blahbity Blah", in the case of what AI commonly spews out).

This is fairly basic writing advice!

To be clear, I'm not accusing his books of being written like this or with AI. I'm simply responding to the writing style of this article. For me, it reduces the trustworthiness of the claims in the article, especially combined with the key missing detail of why/how exactly such a large gift card was being purchased.


> To be clear, I'm not accusing his books as being written like this or using AI. I'm simply responding to the writing style of this article.

It's unlikely that the article had the benefit of professional, external editing, unlike the books. Moreover, it's likely that this article was written in a relatively short amount of time, so maybe give the author a break that it's not formatted the way you would prefer if you were copyediting? I think you're just nitpicking here. It's a blog post, not a book.

Look at the last line of the article: "No permission granted to any AI/LLM/ML-powered system (or similar)." The author has also written several previous articles that appear to be anti-AI: https://hey.paris/posts/govai/ https://hey.paris/posts/cba/ https://hey.paris/posts/genai/

So again, I think it's ridiculous to claim that the article was written by AI.


It's a difference of opinion and that's fine. But I'll just say, notice how those 3 previous articles you linked don't contain "The Blahbity Blah" style headers throughout, while this article has nine occurrences of them.


> notice how those 3 previous articles you linked don't contain "The Blahbity Blah" style headers throughout, while this article has nine occurrences of them.

The post https://hey.paris/posts/cba/ has five bold "And..." headers, which is even worse than "The..." headers.

Would AI do that? The more plausible explanation is that the writer just has a somewhat annoying blogging style, or lack of style.


To me those "And..." headers read as intentional repetition to drive home a point. That isn't bad writing in my opinion. Notice each header varies the syntax/phrasing there. They aren't like "And [adjective] [noun]".

We're clearly not going to agree here, but I just ask that as you read various articles over the next few weeks, please pay attention to headers especially of the form "The ___ Trap", "The ___ Problem", "The ___ Solution".


> I just ask that as you read various articles over the next few weeks, please pay attention to headers especially of the form "The ___ Trap", "The ___ Problem", "The ___ Solution".

No, I'm going to try very hard to forget that I ever engaged in this discussion. I think your evidence is minimal at best, your argument self-contradictory at worst. The issue is not even whether you and I agree but whether it's justifiable to make a public accusation of AI authorship. Unless there's an open-and-shut case—which is definitely not the case here—it's best to err on the side of not making such accusations, and I think this approach is recommended by the HN guidelines.

I would also note that your empirical claim is inaccurate. A number of the headers are just "The [noun]". In fact, there's a correspondence between the headers and subheaders, where the subheaders follow the pattern of the main header:

> The Situation • The Trigger • The Consequence • The Damage

> The "New Account" Trap • The Legal Catch • The Technical Trap • The Developer Risk

This correspondence could be considered evidence of intention, a human mind behind the words, perhaps even a clever mind.

By the way, the liberal use of headers and subheaders may feel superfluous to you, but it's reminiscent of textbook writing, which is the author's specialty.


[flagged]


> please don't make it out like a throwaway "AI bad" argument.

The issue isn't whether AI is good or bad or neither or both. The issue is whether the author used AI or not. And you were actually the one who suggested that the author's alleged use of AI made the article less trustworthy. The only reason you mentioned it was to malign the author; you would never say, for example, "The author obviously used a spellchecker, which affects how trustworthy I find the article."

> If you think this is good writing then you're welcome to your opinion

I didn't say it's good writing. To the contrary, I said, "the writer just has a somewhat annoying blogging style, or lack of style."

The debate was never about the author's style but rather about the author's identity, i.e., human or machine.

> Textbooks don't contain section headers every few paragraphs.

Of course they do. I just pulled some off my shelves to look.

Not all textbooks do, but some definitely do.


I said it affects how trustworthy I find the article, when considered in combination with other aspects of this situation that don't add up to me.

After going through my technical bookshelf I can't find a single example that follows this header/bullet style. And meanwhile I have seen countless posts that are known to be AI-assisted which do.

Apparently we exist in different realities, and are never going to agree on this, so there is no point in discussing further.


> Textbooks don’t contain section headers every few paragraphs.

Yes they absolutely do. What are you even talking about?


> I know. I just mentioned them as another silly but common reason why people unjustly accuse professional writers of being AI.

The difference is that using em dashes is good, whereas the cringe headings should die in a fire whether they’re written by an LLM or a human.


Heuristics are nice but must be reviewed when confronted with actual counterexamples.

If this is a published author known to have written books before LLMs, why automatically decide "humans don't write like this"? He's human and he does write like this!


The author is reputable, just look at the rest of their website.

Your accusation on the other hand is based on far-fetched speculation.


My writing from 5+ years ago was accused of being AI generated by laymen because I used Markdown, emojis and dared to use headers for different sections in my articles.

It's kind of weird realizing you write like generic ChatGPT. I've felt the need to put human errors, less markup, etc into stuff I write now.


> I've felt the need to put human errors, less markup, etc into stuff I write now.

Don't give in to the nitwits!


The author lives in Australia. You get points from the supermarket for purchasing gift cards during certain promotions; it's around 10% of the card value.


Gift cards are associated with money laundering and many online scams. I would guess any usage of them (especially in larger denominations) attracts increased attention and additional risk. That's nonsensical of course (why does Apple sell them if it's also suspicious of them?), but I would guess that if he had paid with a credit card there would have been no issue.

If you receive them as a gift, use them only in a situation unconnected with your cloud ID, such as to pay for new hardware at an Apple store.


> I'm more curious how/why the author ended up with a $500 gift card. That's a large amount, and the author never shares how this was obtained, which seems like a key missing detail. Did the author buy the gift card for himself (why?) or did someone give him a very large gift (why not mention that?)

The author mentions a big store (he compares it to Walmart for US-based readers).

I would assume this was an accepted form of "return a product without a receipt" or "we want to accept your complaint about this product we sold going crazy one day after its warranty ended, but we cannot give you cash back", etc.


I don't understand. Gift cards typically cannot be returned, at least in the US. And the author said the gift card was redeemed "to pay for my 6TB iCloud+ storage plan", which also cannot be returned I'd imagine.


But gift cards aren't supposed to work that way, right? If it wasn't "legal" or "okay" to have a 500 dollar card, they shouldn't be sold. They are available, therefore they should be perfectly usable.

I don't want to speculate more, but one of the use cases for them is for people that choose to not use cards online (or even don't have credit cards at all) to be able to buy digital goods with cash.

Either way, if we're questioning buying/using the gift card, we're blaming the victim


I'm not blaming anyone; I just find it surprising that this detail wasn't mentioned or explained. Its omission makes the article less trustworthy to me.

People are fast to pull out pitchforks in response to outrage-bait posts like this, but (generally speaking) a nontrivial percentage of such posts are intentionally omitting details which can help explain the other side's actions.

Also I genuinely wasn't familiar with this specific use-case for gift cards. At least in the US, you can buy general-purpose prepaid debit cards for this type of thing instead, or use various services which generate virtual cards e.g. privacy.com. To me that seems infinitely more normal than buying a large-value "gift card" for yourself, but I'm admittedly not familiar with the options in other countries.


1. Prepaid Visa or Mastercard cards come with an extra fee (5–6 dollars per card, if I recall correctly).

2. I haven't seen prepaid cards in stores outside the US, so they are probably not that popular elsewhere.

Sometimes you also want to shift your spending: if you spend 500 USD this month at a particular store, you'll get a good cashback percentage. So you end up buying a gift card that you know you'll definitely use next month.

I think this is irrelevant, TBH.


Even if privacy.com were available in a given country, it just means you hand your transactions and identity to some other company. Cash (and so gift cards, if they don't accept cash) is the most private way.


Re: AI writing: AI tends to use commas for such claims (and might be getting better about it), in the form of “it’s not just X, but [optionally: also] Y”.

Even if it feels sus, remember that AI is trained on what it sees: even the posts here will make it more and more effective at “writing like a human”.

As for the OP, the claims of existing and having published books, etc. are relatively easy to verify publicly.

No, $500 isn’t a large amount, doubly so these days. I consistently have to try to re-anchor, but $100 is the new $20 (sadly).


AI used em-dashes initially in that type of sentence structure, but more recently moved to a mix of semicolons and commas, at least from what I've been seeing.

I never claimed the author doesn't exist.

$500 is objectively a large amount for a gift card. Off-the-shelf gift cards with predetermined amounts are almost always substantially less than this.


Did you even read the article? "The only recent activity on my account was a recent attempt to redeem a $500 Apple Gift Card to pay for my 6TB iCloud+ storage plan" – a 6TB plan is $29.99 monthly. It's not far-fetched to assume he purchased a $500 gift card so he could keep the subscription going without worrying about it!

"The card was purchased from a major brick-and-mortar retailer (Australians, think Woolworths scale; Americans, think Walmart scale)" There's not much reason to assume someone unaffiliated with the author bought this card; he mentions talking to the vendor and getting a replacement, which means he has the receipt.


Yes, I read the article and it simply does not directly address who purchased the card.

It certainly implies the author bought the card for himself, yes; but that seems rather unusual to me, especially for such a high amount.

Why would you purchase a $500 gift card for yourself to "keep a subscription without worrying about it" as opposed to just paying the small monthly amount? Honest question, I literally don't understand that motivation at all. In my mind a gift card is more problematic than a normal credit card in this scenario since it eventually runs out.

Second question: why did you create an HN account just to write this comment?


> Why would you purchase a $500 gift card for yourself to "keep a subscription without worrying about it" as opposed to just paying the small monthly amount? Honest question, I literally don't understand that motivation at all. In my mind a gift card is more problematic than a normal credit card in this scenario since it eventually runs out.

Aside from the promotional bonuses that other users have mentioned, if you have an Apple Family Sharing group you can only use a single credit card tied to the main account for any payments to Apple, but individual accounts will draw down from their Apple Account balance before using that credit card - so gift cards let individuals pay for their own Apple things (subscriptions or otherwise).


I wonder if you can prepay using a card? But otherwise, to answer your potential question: I understand OP, as I like to prepay things like my phone operator. I put 500 USD there and come back one year later. This way it frees up my limit of 10 virtual cards, and most of all, lets me keep their limits as close as possible to the minimum. If you have a mix of services on the same card it is much more difficult and riskier. If you have 100 USD + 50 USD + 25 USD + 75 USD + 60 USD in monthly spend, then you have 310 USD at risk, when your risk could be way lower.


There are people who don't like to spread their credit card numbers/identity around.

There are a number of services that I pay for with either their own gift cards or generic gift cards.


Did you read the comment you're responding to? Where in the article does it explain why an adult is buying a $500 gift card to pay for their Apple subscription instead of just paying for it directly?


“Please don't comment on whether someone read an article. "Did you even read the article? It mentions that" can be shortened to "The article mentions that". ” --https://news.ycombinator.com/newsguidelines.html


There is if you want to blame the victim and/or work for Apple.


I understand that Disney might care about this, but I don't see why they should.

What exactly does “fanart” (no matter how distasteful and controversial) change?

Let people generate whatever fictional character they want.


It only works until Mickey Mouse shows up on your TikTok feed lynching an African American and doing a Sieg Heil salute. Are you sure Disney wants that, or would not care about that?


There are clearly plenty of people who feel the same way as you, and for those people what I’m about to ask might have such an obvious answer that it could feel like I’m being rhetorical or feigning ignorance to provoke an emotional response, but it’s truly honest:

Why should Disney care?

To which you might say “because people care”, so:

Why should people care?

Back when I was a spud I used futuristic text-to-speech synthesis to make my computer say “Eye am Bill Gaytes my farts go FERT FERT FERT” - Should Bill Gates be offended? What about the people who like him? What about the Intel processor I used to create it? Or the company behind the TTS software? Would anyone think they’re involved and endorsed it? I guess the real question is: are we catering the world to people who can’t make that distinction?


People care because it's entirely unconscious. Even if you choose not to care, you can't, because you've already seen it.

The way advertisement works is that it's brain hacking - it's just associations. Over time your brain associates a brand with a product or products, and then simply by having this association in your brain you're more likely to buy the product.

This also works for negative advertisement.

Think about it. Suppose you did see Mickey Mouse saluting Hitler, or maybe you saw Mickey Mouse stick a jar up his little rat ass.

When you see Mickey Mouse, undoubtedly, even if just for a second, your mind will think about what you saw before. You might discount it immediately, but the damage is done. You still feel that emotion, even if only for a split second, and you have been influenced by it.


> Suppose you did see Mickey Mouse saluting Hitler

Appropriate example, more than some may realize. Walt Disney and Adolf Hitler were good friends. Walt would send him a reel of cartoons every month. Adolf and his senior leaders would watch it at the resort in the Alps. One may still be able to find the silent films that one of his mistresses filmed showing them watching cartoons on the projector with his senior leaders. Adolf was big into art and appreciated the work Disney created. There was a project about 15 or so years ago to use computing power to figure out what Adolf was saying based on his muscle movements, since she was filming from behind him at an angle. I can't remember what the project was called. I saw it on a TV program.


The advertising mechanism you outlined proves too much. Every parody, every satirical cartoon, every unflattering depiction creates profit-risking associations? The idea that a negative association might originate from AI-generated Mickey doing something vile does not seem categorically distinct from the hand-drawn rule 34 that has existed for decades, or from South Park episodes, or from bathroom stall drawings. Memories, sure, but I’m not sure the details of the studies support the idea that seeing childish or satirical works that are obviously not created or supported by the IP holder will have that kind of negative cognitive association. Actual acts done by the company, or willing associations with unsavory acts - absolutely. But there’s a wide distinction between taking a cartoon episode out of syndication because Epstein was a guest character voice, and fretting over a 3rd grader typing “Daffy but with boobs and stuff” into photoshop-o-matic.com. The question is whether fleeting cognitive residue constitutes actionable damage or simply the background noise of living among other minds who create things.

Kids making their computers say profane things about famous people or even making crude jokes at the expense of the disabled themselves created “negative associations” with the technology, and potentially with the companies producing it (if the effect is somehow unaffected by context), but the developers did not restrict access and blind people gained a tool that fundamentally altered their ability to navigate the world.

Now? Parents of a terminally ill child who cannot afford a trip could place their daughter in a photo with Elsa. Therapists working with autistic children who connect only with specific Disney characters could generate personalized social stories and visual supports. Teachers in underfunded schools could create engaging materials without licensing fees. Placing a real person alongside Mickey Mouse, or just making a Disney character give a thumbs up and “Happy Birthday, Billy”, required Disney's permission, professional artistic skill, and significant money. That gatekeeping is dissolving and I can’t imagine the positive impact it could have in people’s lives…apparently assuming Billy doesn’t get access to the prompt input first and ruin it for everyone.


The question is why it shows up on your Tiktok feed.


This used to be a "zing", but I don't think it is anymore. Try making a new profile somewhere and selecting a few topics of interest. You will get suggested the most engaging "relevant" content. For me, I made a cycling Instagram and my feed instantly got filled with girls showing off cleavage in lycra, tagged with cycling hashtags.


It was not meant as an attack on GP; it was meant as exactly this opportunity to question "the algorithm".


I swear to god Instagram is like the patriarchy speedrun.

If it finds out you're a woman, within mere minutes it's 100% "you're fat" "try this diet" "you've GOT to buy this viral dress on shein!!"

And if you're a man, it's boobs, ass, objectification, and products to make you feel more like a man.

The sheer velocity at which Instagram will shovel you into capitalist-patriarchy++ is shocking.


...have you ever thought about the way you're using the app, then? Because I, personally, get nothing else other than dumb memes and posts from people I follow.


When I say this happens in mere minutes and to everyone I know, I mean it.

For the record, I don't use Instagram because it's basically always been toxic. It's one of the fastest ways to feel bad about your body and life.


No, stop with the stupid shaming. The point was that the algorithm pushes certain content on people, no matter what they actually want to see.


And I said that I do not have this issue and my current account is relatively fresh, having been made in May of this year.


I thought it was well known and generally accepted that the social media companies push controversial click/rage bait to keep people “engaged”?


Look, I've gotten cartel beheadings and beatings on a YouTube search query for Jack Russell terriers.

Don't throw shade. If you haven't gotten "How the fuck did that get there?", consider yourself lucky I guess. Best I can figure, terriers have some unintentional shared vector space with much more unpleasant content.


terriers/terrorists


This is a good point. Instead of policing what trolls will use it for, the same AI should be able to detect racist content and prevent it from spreading


People saying this have not worked with Sora before. I challenge you to generate anything even close to that.


Tiktok has AI moderation tools that you are highly underestimating.


The people making these are good at more subtle forms of hate with coded language and indirect references.


The people making AI moderation tools are also good, and in my opinion more skilled than the abusers.


So what? Similar videos and pictures have existed since the dawn of the web.

Yes, AI enables people to produce these in higher fidelity, but I don't see how it is any different to Dolan MS Paint comics.

No one is going to think that Mickey lynching someone is official art, nor will they think that Mickey is a real person who has done that.


I plan to set up Immich so that I can have a central photo storage.

Apple Photos plays poorly when you want to put the library on an external drive (and even more poorly when you want to put it on a networked drive).


Anything that's online and anything that has large multiplayer lobbies/worlds is inherently very low on the kid safety gauge, no?


Yeah, no. Nintendo, as the other poster said. Random stranger interaction == bad. Can’t verify age == bad.

Can only add your friends for chat? Is fine.

I feel like this same strategy is sane for adults also. Before the internet, we did fine making friends and playing games with people we actually know. So much of the awfulness of the modern online space comes from anonymous interactions with strangers. I don’t think human social connections are able to scale in the way the internet enables.


It's not even that stranger interaction is inherently bad. Nintendo's Splatoon series is very much centered on multiplayer with strangers, but it manages to stay safe for kids because of the lack of chat and, for the most part, user-generated content.


> Can only add your friends for chat? Is fine.

I'm not saying it's not fine (depending on the age), but you won't convince me it's safer than playing SimCity 3000. It is inherently less safe.


As the parent, I have complete control over who my children can connect with on Nintendo (not so with Roblox and others). That makes it safe, because I can double-check with the parent of said friend. It's completely fine.


I beg your pardon! SimCity incipiently indoctrinates children to believe in crazy stuff like the 9-9-9 tax plan.

https://en.wikipedia.org/wiki/9%E2%80%939%E2%80%939_Plan


It can also indoctrinate them to becoming mass murderers by spawning huge zombie outbreaks.


Nope, in many games there's no chat and any interaction between players is only within the rules of the game. It's very safe. You cannot stalk a kid or even know it's a kid.

Somebody mentioned the Nintendo platform; see that for example.


Fair enough, but I think creativity can usually escape the box and make it possible to communicate within the limitations imposed by the game.

And if not, then what even is the value of playing online as opposed to locally with AIs?

If children want to play together with their friends, isn't it much better to spin up a Minecraft server or such for friends they know from real life, instead of playing games that limit them to "very narrowly specific interactions" anyway?


No, there is no way. Yes, it is hard to tell if it's AI. But humans play differently. And this is also why anti-cheat systems exist.

> If children want to play together with their friends, isn't it much better to spin up a Minecraft server or such for friends they know from real life, instead of playing games that limit them to "very narrowly specific interactions" anyway?

Compared to Roblox, sure. Who would even argue against that?

But there are many games where it really doesn't matter. It's probably most games... car racing sims... football sims... strategy like Civilization...


But most games have unrestricted chat, aside from maybe wordlist filters ...

WoW, RuneScape, Call of Duty, Battlefield, ... Didn't and don't they still all have basically unrestricted chat? Sure, they might not be expressly marketed to kids, but everyone I knew was playing WoW and RuneScape in elementary school with no issues.


None of those games are made for children. Particularly not CoD and Battlefield.

That doesn't mean kids don't want to play them. But Roblox is pitching itself as a safe space for kids.


It goes the other way as well. It is dumb to run away from the police when they stop you for a minor infraction and face a very high chance of getting caught and getting into a major problem. At least I would hope that the penalties for running away are very serious.

The police officers don't know why you are running away and can reasonably expect that there is something wrong other than an unbuckled seat belt -> a kidnapped person, tons of drugs in the trunk, a wanted murderer driving, etc.

Well, at least in my country, where chases are rare. I understand that in the US it is difficult, since people are more eager to run away.


> It goes the other way as well. It is dumb to run away from the police when they stop you for a minor infraction and face a very high chance of getting caught and getting into a major problem

Right, people are dumb. You can't just throw your hands in the air and declare a problem unsolvable because people are dumb and keep acting against their best interest; you acknowledge that fact and change tack accordingly. If it turns out that trying to pull people over for minor infractions causes 1% of those incidents to turn into violent chases, then you should stop pulling people over for minor infractions and figure out a safer way to ticket them. At the very least you shouldn't chase after them in your car and add another dangerous vehicle to the road. It reflects a mindset of "get and punish the bad guys" being prioritized over "improve the safety of your community," which pretty much sums up the culture problem with American police and criminal justice in general.


Not that it matters, but we would get the same information if he was able to show it to his son that time.


Might even be better to go down at the same time as everyone else, because customers might be more lenient on you.


I like this. I'm currently working on a (simple) iOS game, mostly because I got fed up with all of the dark patterns that are so highly prevalent on the market.

I'm even thinking about naming it something like `Pay Upfront: Strategy Game` to underline the single purchase model, but perhaps it's silly to go that far?

