
The president has no authority to rename the Department of Defense, but he and his administration demand compliance under threat of legal consequences.

Just as one example, they threatened Google when it didn't immediately rename the Gulf of Mexico to the "Gulf of America" on its maps. Other companies now follow this illegal guidance because they know they will be threatened too if they don't comply.

There is a word for when the government uses threats to enforce illegal edicts. That word is "fascism". Denying this is irresponsible, especially in the context of this situation, where the government is threatening to force a private company to provide services that it doesn't currently provide.




It means something violates the law. Am I right?



Renaming the DoD does directly contradict the National Security Act of 1947, which renamed the Department of War to the Department of the Army and placed it under the umbrella department that would be renamed the Department of Defense.

Cool.

No renaming happened though.

By the way, your illegal use of the term "DoD" to refer to the Department of Defense is pretty shocking. This isn't authorized by the Act of 1947.


The National Security Act of 1947, as amended on August 10, 1949, establishes the name of the executive department overseeing the military as the Department of Defense.

Great.

Where does it prohibit alternative names?


That would be a significant free speech violation, so it doesn't.

However, the idea that an "alternative name" should be espoused by the executive branch means that they do not believe Congress should set the name of the department. Which is a point of contention, as Congress set the name over seventy years ago. The act was already amended for a rename in 1949. The problem isn't the name. The problem is the idea behind renaming it unilaterally: the idea that the President has more authority than Congress.


Someone with 1200 points after 14 years on HN shouldn't be calling out green-account noobs, especially when they are being very reasonable in their comments and you're objectively wrong.

You used “green account” like a slur.


No, I should point out new accounts that are objectively wrong and trying to stir up division and hate.

As should you, if you weren't in a similar position to them. Which it seems like you are?


Your comments in this thread are all flagged, dead, or downvoted to irrelevance. It's clear you're wrong; go get educated.

Damn, the IDF got this guy mid-sentence...

WHAT DOES HE KNOW FOR SURE???

Especially for a tool that only works on macOS and iPhone, and serves only one purpose.

Pretty much every developer out there has some kind of tooling that does this already, that also does more.

This is a cool little project, but I cannot imagine paying for it.


I am not a big fan of AI-generated educational content, mostly because it's a great way to confidently learn falsehoods and misconceptions. I would prefer to learn from a reliable and reputable source.

I am also not a big fan of trying to beat doomscrolling. One of the defining properties of doomscrolling is that it is mindless and addictive. The moment you try to create a mindful, healthy alternative, you've already lost. No product will ever beat doomscrolling; only individuals dedicated to their own mental health are capable of clearing this hurdle.


This was a couple of years ago, but I remember using ChatGPT to try and study for a certification by generating quiz questions.

It would always start to make every correct answer option "C" over time, no matter what I tried. Eventually I was so focused on whether or not it was stuck in a "C" loop that I started overthinking all of the questions and wasting time.

Flash forward to testing Sonnet 4.6 recently to see if it could effectively teach me something new: I got about five prompts in before I had to point out an oversight, and it gave me the classic "you're absolutely right, ignore that suggestion".

This is anecdotal of course, but at least LLMs are helping to build my skills of fact verification and citation checking!


Oh wow, a true statement on a government website. I'm sure they'll take it down within a day.

Maybe a way to game the modern right is to draw attention to something true so that they remember it exists, then they try to censor it, thereby triggering a Streisand effect.

I mean, yeah, if you specifically like lighting off fireworks at the gas station, you should buy your own gas station, make sure it's far away from any other structures, ensure that the gas tanks and lines are completely empty, and then do whatever pyromanic stuff you feel like safely.

Same thing with OpenClaw. Install it on its own machine, put it on its own network, don't give it access to your actual identity or anything sensitive, and be careful not to let it do things that would harm you or others. Other than that, have fun playing with the agent and let it do things for you.

It's not a nuke. It can be contained. You don't have to trust it or give it access to anything you aren't comfortable being public.
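The containment steps above can be sketched as a locked-down container launch. This is a hypothetical configuration: the image name, mounted path, and exact flags are illustrative assumptions, not OpenClaw's documented setup.

```shell
# Hypothetical sketch of the isolation advice above.
# --network none:   no network access at all (swap in a dedicated, firewalled
#                   bridge network if the agent genuinely needs the web)
# --read-only:      immutable root filesystem; /tmp is the only scratch space
# --cap-drop ALL:   drop every Linux capability the container would inherit
# -v .../sandbox:   mount only a throwaway directory, never $HOME or anything
#                   tied to your real identity
docker run --rm \
  --network none \
  --read-only --tmpfs /tmp \
  --cap-drop ALL \
  -v "$PWD/sandbox:/work" \
  openclaw/agent:latest
```

The point matches the gas-station analogy: nothing sensitive is reachable from inside the sandbox, so the worst a misbehaving agent can do is trash its own throwaway directory.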


There's absolutely no way to contain people who want to use this for misdeeds. They are just getting started now and will make the web utter fucking hell if they are allowed to continue.

> There's absolutely no way to contain people who want to use this for misdeeds.

There is no practical way to stop someone from going to a crowded mall during Christmas shopping season and mowing people down with a machine gun. Yet, we still haven't made malls illegal.

> ... if they are allowed to continue.

You may have a fantastic new idea on how we can create a worldwide ban on such a thing. If so, please share it with the rest of us.


If you can come up with a technical and legal approach that contains the misdeeds, but doesn't compromise the positive uses, I'm with you. I just don't see it happening. The most you can do is go after operators if it misbehaves.

I've been around since before the web. You know what made the Internet suck for me? Letting people act anonymously, especially in forums. Pre-web, I was part of a local network of BBSes, and the best thing about it was that anonymity was simply forbidden. Each BBS operator in the network verified the identity of the user. Users had to post under their own names or be banned. We had moderators, but the lack of anonymity really ensured people behaved. Acting poorly didn't just affect your access to one BBS, but access to the whole network.

Bots spreading crap on the web? It's merely an increment over the problem of allowing anonymous users. You can't solve one while maintaining anonymity.


I don't care about the "positive" uses. Whatever convenience they grant is more than offset by skill and thought degeneration, lack of control and agency, etc. We've spent two decades learning about all the negative cognitive effects of social media; LLMs are speedrunning further brain damage. I know two people who've been treated for AI psychosis. Enough.

Again, I'm not disagreeing with the harm.

But I think drawing the line of banning AI bots is highly convenient. If you want to solve the problem, disallow anonymity.

Of course, there are (very few) positive use cases for online anonymity, but to quote you: "I don't care about the positive uses." The damage it did is significantly greater than the positives.

At least with LLMs (as a whole, not as bots), the positives likely outweigh the negatives significantly. That cannot be said about online anonymity.


Okay, but what are you actually proposing? This genie isn't going back in the bottle.

At a minimum, every single person who has been slandered, bullied, blackmailed, tricked, or has suffered psychological damage, etc., as a result of a bot or chat interface should be entitled to damages from the company authoring the model. These claims should be processed extremely quickly, without a court appearance by any of the parties: the problem is so blatantly obvious and widespread that there's no reason to tie up the courts with this garbage or force claimants to seek representation.

Bots must advertise their model provider to every person they interact with, and platforms must restrict bots that do not or cannot abide by this. If they can't do this, the penalties must be severe.

There are many ways to put the externalities back on model providers, this is just the kernel of a suggestion for a path forward, but all the people pretending like this is impossible are just wrong.


> should be entitled to damages from the company authoring the model.

1. How will you know it's a bot?

2. How will you know the model?

Do you want to target the model authors or the LLM providers? If company X is serving an LLM created by academic researchers at University Y, will you go after Y or X? Or both?

> These should be processed extremely quickly, without a court appearance by any of the parties, as the problem is so blatantly obvious and widespread there's no reason to tie up the courts with this garbage or force claimants to seek representation.

Ouch. Throw due process out the door!

> Bots must advertise their model provider to every person they interact with, and platforms must restrict bots that do not or cannot abide by this.

This is more reasonable, but for the fact that the bots can simply state the wrong model, or change it daily.

Unfortunately, the simple reason your proposal will fail is that if country X does it, they'll be left far behind country Y that doesn't. It's national suicide to regulate in this fashion.


> 1. How will you know it's a bot?

> 2. How will you know the model?

Sounds like a problem for the platforms and model vendors to figure out!

> Do you want to target the model authors or the LLM providers? If company X is serving an LLM created by academic researchers at University Y, will you go after Y or X? Or both?

I mean providers are obviously my primary concern as the people selling something to the public, but sure, why not both.

> Ouch. Throw due process out the door!

There's lots of prior art for this, let's not pretend like this is something new. The NLRB adjudicates labor complaints and disputes, the DoT adjudicates complaints about airlines, etc.

> This is more reasonable, but for the fact that the bots can simply state the wrong model, or change it daily.

Once again, sounds like a problem for the platforms to figure out! How do they handle spammers and abusers today? Throw up their hands? Guess they won't be able to do that for long!

> Unfortunately, the simple reason your proposal will fail is that if country X does it, they'll be left far behind country Y that doesn't. It's national suicide to regulate in this fashion.

Sounds like a diplomatic problem, if it actually is a problem. In reality the social harms of AI may exceed any supposed benefits. The optimistic case seems to be that AI becomes so powerful it causes a massive hemorrhaging of jobs in knowledge work (and later other forms of work). Still waiting to see any social benefits!


> Sounds like a problem for the platforms and model vendors to figure out!

> sounds like a problem for the platforms to figure out!

You'd have to fundamentally change how the Internet works to be able to figure these things out. To achieve this, you'd need cooperation from everybody, not just LLM providers.


> I don't care about the "positive" uses.

You should have stopped there.


Hey, so this pretty much looks like a Tailscale rip-off. Not a competitor, but a straight-up ripoff in the worst possible way.

Tailscale has LONG used Pangolins as a mascot (https://tailscale.com/blog/network-pangolins). They even run a "Pangolin Enthusiast" website (https://tailandscales.com/) that is essentially a demo site for their tutorials. This is an animal with a tail and scales! The branding is very good.

You clearly chose this name to deliberately create brand confusion with Tailscale, and to derail some of their marketing and community branding in your favor. That's scummy behavior by developers who don't have anything to offer but a copy of someone else's work.

If I am going to choose between Tailscale and a project opportunistically attempting to impersonate Tailscale, why would I ever choose the scummy impersonator?


It would be really helpful if I knew how this thing was configured.

I am certain you could write a soul.md to create the most obstinate, uncooperative bot imaginable, and that this bot would be highly effective at preventing third parties from tricking it out of secrets.

But such a configuration would be toxic to the actual function of OpenClaw. I would like some amount of proof that this instance is actually functional and is capable of doing tasks for the user without being blocked by an overly restrictive initial prompt.

This kind of security is important, but the real challenge is making it useful to the user and useless to a bad actor.


I mean, it's literally a repo belonging to NEAR AI.


Optimism is a luxury for those who won't be the ones paying for the mistake.


I'm optimistic that my favorite team will play well this season.

I ain't paying for shit.

