Hacker News | SCHiM's comments

> Your company can scream to anyone that listens that all the competition is AI SLOP, but when hundreds of companies are pitching the same solution, your one voice will get lost.

If you cannot outcompete "AI SLOP" on merit over time (uptime? accuracy? data loss?), then the AI SLOP is not actually sloppy...

If your runway runs out before you can prove your merit over that timeframe, but you are convinced that the AI is slop, then you should ship the slop first and pivot once you get $$, but before you get overwhelmed with tech debt.

Personally, I love that I can finally outcompete companies with teams of developers. Unlike many posters here, I was limited by the time (and mental space) it takes to do the actual writing.


It certainly seems possible that AI slop could be flawed in some major ways while still competing well in the market: security is usually invisible to users until it isn't, uptime and bug counts can look similar, and accessibility can be ignored if you don't mind being an unethical person.

Then again this is also often a flaw with human-generated slop, so it is hard to say what any of this really means.


> accessibility can be ignored

How good are AI-assisted accessibility tools now? Is the poison also the cure here?


I don't think it matters for developers. They compete in the short term.


I guess the point is that startups being dead because scaling up becomes harder doesn't mean that organic growth is harder. In fact, the potential ways forward offered by the article are not really dependent on VC funding.


What company did you outcompete?


But you're not just trying to outcompete one AI slop product, you must compete with ALL of them. And over time the ratio of AI slop to thoughtful companies is only going to increase.


Your comment encourages me to make an AI SLOP version of a product I had in mind.


IMO it's shoddy. Anybody can get hacked, that's true. But a modern corp that has tried to defend itself should have multiple layers of defenses against complete pwnage.

If you've paid attention in the last 10 (or even 5) years as a company, and did some pentests and redteams, you've seen how you could be breached, and you took appropriate steps years ago.

A non-shoddy company will have:

- Hardened their user endpoints with some sort of modern EDR/detection suite.

- Removed credentials from the network shares (really).

- Made sure random employees are not highly privileged.

- Made sure admin privileges are scoped to admin business roles (DBA admin is not admin on webservers, and vice-versa).

- Made sure everyone is using MFA for truly critical actions and resource access.

- Patched their servers.

- Done some pentests.

This won't stop the random tier 2 breach of some workstation or forgotten server still hooked up to prod/testing, but it will stop the compromise _after_ that first step. So sure, hackers will still shitpost some Slack channel dumps, but they won't ransomware your whole workstation fleet...
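The "removed credentials from the network shares" item is one of the few on the list you can sanity-check with a script. A crude sketch, assuming a mounted share path and a couple of illustrative patterns (not a complete audit tool):

```python
import os
import re

# Illustrative patterns only; a real sweep would cover many more formats.
CRED_PATTERNS = [
    re.compile(r"password\s*[:=]", re.IGNORECASE),
    re.compile(r"-----BEGIN (RSA|OPENSSH) PRIVATE KEY-----"),
]

def find_credential_files(share_root):
    """Return paths of files under share_root that look like they hold secrets."""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(share_root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "r", errors="ignore") as fh:
                    text = fh.read(64 * 1024)  # only sniff the first 64 KiB
            except OSError:
                continue  # unreadable file: skip, don't crash the sweep
            if any(p.search(text) for p in CRED_PATTERNS):
                hits.append(path)
    return hits
```

Run it against your share mounts periodically; anything it flags is exactly what an attacker doing the "tier 2 breach" above would grep for first.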


I guess you forgot the most important part: making sure your security and devops teams and people in company management follow exactly the same protocol as everyone else, with no exceptions.

Because big bosses hate it when their PCs don't just let them run whatever they want, and when they're not allowed to VPN into the network from home or from their grandma's desktop (because they like her very much).

Also, any Linux nerd sysadmin dude (like me) who knows better is another type of person who hates following rules.


In these times of ransomware, also (off-site) backup / restore / disaster recovery.


Could you explain:

1) What are the limitations of the scaling you do? Can I do this programmatically? I.e. send some requests to get additional pods of a specific type online?

2) What have you done in terms of security hardening? You mention hardened pods/cluster, but specifically: did you do a pentest? Just follow best practices? Periodic scans? Stress tests?


Thanks for your questions!

1) The platform provides a control plane that helps you deploy the cluster on your own Hetzner account, so you are in control of resources and pay usage costs directly to Hetzner.

2) Because you have full access to the Kubernetes cluster and it runs on your own Hetzner account, the security of the cluster is a shared responsibility and you can fine-tune the configuration according to your requirements. The platform's security is entirely our responsibility. We try to follow best practices, and internal penetration tests were conducted, but we're still in beta and trying to see if there's interest in such a product before launching the stable version.
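Since you get full access to the cluster, scaling a specific workload programmatically should just be the standard Kubernetes scale API rather than anything platform-specific. A minimal sketch with the official Python client; the deployment name and namespace are made up for illustration:

```python
def build_scale_patch(replicas):
    """Minimal body for the Kubernetes scale subresource: only spec.replicas."""
    return {"spec": {"replicas": int(replicas)}}

def scale_deployment(name, namespace, replicas):
    # Requires the official client: pip install kubernetes
    from kubernetes import client, config

    # Uses the kubeconfig your control plane hands you for the Hetzner-hosted cluster.
    config.load_kube_config()
    client.AppsV1Api().patch_namespaced_deployment_scale(
        name, namespace, build_scale_patch(replicas)
    )

# Hypothetical usage: bring two more pods of the "worker" deployment online.
# scale_deployment("worker", "default", 5)
```

This is the same operation `kubectl scale deployment worker --replicas=5` performs, so anything that can send an authenticated HTTPS request to the API server can drive it.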


It's down. Tested from two servers; 8.8.8.8 and others are up.


Even big customers have a use for what you've built in the high-security areas they might have. Think SWIFT Alliance servers in a specialized network segment at financials, or perhaps sensitive medical information in health care?

I think you should not have any issues integrating with legacy AD, but note that bigger enterprises have mostly moved to online IdPs. Integrating with legacy AD will also likely make your product less secure. Maybe not the way to go?


For anyone else wondering what IdPs are:

> What is an identity provider (IdP)?
>
> An identity provider (IdP) is a service that stores and verifies user identity. IdPs are typically cloud-hosted services, and they often work with single sign-on (SSO) providers to authenticate users.

Read a full explanation at: https://www.cloudflare.com/learning/access-management/what-i...
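For a concrete feel of how an app hands authentication off to an IdP, here is a sketch of the first leg of the OpenID Connect authorization-code flow; the issuer endpoint, client ID, and redirect URI are placeholders, not real services:

```python
from urllib.parse import urlencode

def build_oidc_auth_url(authorize_endpoint, client_id, redirect_uri, state):
    """Build the URL the app redirects the user to; the IdP authenticates
    them and sends a one-time code back to redirect_uri."""
    params = {
        "response_type": "code",
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": "openid profile email",
        "state": state,  # CSRF protection: verify it matches on the callback
    }
    return authorize_endpoint + "?" + urlencode(params)

url = build_oidc_auth_url(
    "https://idp.example.com/authorize",  # hypothetical IdP endpoint
    "my-app",
    "https://app.example.com/callback",
    "random-state-123",
)
```

The app never sees the user's password; it only exchanges the returned code for tokens, which is the property that makes cloud IdPs attractive to enterprises.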


The following works partially:

``` netsh interface ipv6 show interfaces ```

Get your interface id first, you're looking for the IDX number. There might be several.

``` ping ff02::1%LAN_INTERFACE_ID ```

So, example:

``` ping ff02::1%22 ```

Windows ping is not very smart with respect to the firewall: it won't let the response packets through. So you need to disable your firewall to see systems responding.

Sadly, ping won't display the source address. It will state that "ff02::1%22" responded... But if you look in Wireshark you can see that the other systems on your network received and responded to the packet.
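The interface-index lookup can also be scripted instead of eyeballing netsh output. A small sketch that builds the zone-scoped all-nodes address from an index (the `22` is just the example above; sending the actual ICMPv6 echo still needs ping or raw sockets):

```python
import socket

ALL_NODES = "ff02::1"  # IPv6 link-local all-nodes multicast group

def list_interface_indexes():
    """Map interface index -> name; the indexes match netsh's Idx column."""
    return {idx: name for idx, name in socket.if_nameindex()}

def scoped_all_nodes(if_index):
    """Build the zone-scoped address you'd hand to ping, e.g. ff02::1%22."""
    return f"{ALL_NODES}%{if_index}"
```

`socket.if_nameindex()` is available on Windows from Python 3.8 on, so the same script works on both sides of the wire.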


In all honesty, I have the same reservations. If you look at the authz schemes across the different flavors of operating systems, you see that the 'set-uid' concept is comparatively ancient, battle-hardened, and based on well-understood mechanisms.

This new functionality in Windows looks complicated. There's an architectural picture that involves:

* Multiple processes

* Windows RPC (On the basis of RPC? DCOM?)

* Handle inheritance

* Process integrity(?)

* Token privileges(?)

When UAC was introduced, there was a slew of bugs in the underlying RPC mechanism. I wonder if it will be the same. Can't wait to take a look at this in the debugger :)

I also wonder if MSRC will consider this a "security boundary". Based on the fact that the text references process integrity (UAC), and that _is not_ a security boundary, I'm going to guess not. That means that this could potentially introduce bugs, but MSRC will not be handing out bounties for them. Which means that any bugs people find are less likely to be reported, and more likely to find their way into ransomware down the line.


Not to mention this is an attack on multiple fronts: do you _really_ believe DoH is about privacy?


It both is and it isn't. It moves people from different threat models to a middle ground, and for some people that's worse and for some people it's significantly better.

If you live in the U.S. for most major ISP users it's probably a wash and you're just letting a different party have your info. If you live in a nation that desires to control what you consume online then it probably is a net gain.


dnscrypt.org will shuffle between multiple encrypted resolvers. So no single party has your info.


Isn't that worse though since your data would be spread across multiple potentially untrustworthy parties instead of just a single one? Given how many requests a browser makes and normal browsing habits like regularly visiting the same sites, it would eventually lead to each resolver having a full profile of what you browse.

That makes me think of the reason Tor has Entry Guards where the first hop is limited to just a few selected nodes instead of random: https://support.torproject.org/about/entry-guards/
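The aggregation worry is easy to demonstrate with a toy simulation: shuffle repeated lookups across a few resolvers and check how much of the browsing profile each one ends up observing (all numbers are illustrative):

```python
import random

def simulate_profiles(domains, n_resolvers, visits_per_domain, seed=0):
    """Randomly spread repeated DNS lookups over resolvers; return each
    resolver's observed fraction of the full domain profile."""
    rng = random.Random(seed)
    seen = [set() for _ in range(n_resolvers)]
    for domain in domains:
        for _ in range(visits_per_domain):
            # Each visit's lookup goes to a randomly chosen resolver.
            seen[rng.randrange(n_resolvers)].add(domain)
    return [len(s) / len(domains) for s in seen]

# 50 regularly revisited sites, 3 resolvers, 20 visits per site.
shares = simulate_profiles([f"site{i}.example" for i in range(50)],
                           n_resolvers=3, visits_per_domain=20, seed=1)
```

With revisits, the chance a given resolver never sees a given site is (2/3)^20, essentially zero, so every resolver converges on nearly the complete profile, which is the point about shuffling being worse than a single party.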


That is actually possible. For example, someone wrote python code to do this for the massive open source model BLOOM.

However, it's still slow as tar. When I was running the BLOOM model, I think my inference time was 1 token per minute.

See: https://towardsdatascience.com/run-bloom-the-largest-open-ac...
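The slowdown comes from streaming weights from disk for every layer, on every token. A back-of-the-envelope sketch of that cost; the layer count, sizes, and disk speed below are rough assumptions for a BLOOM-scale model, not measurements:

```python
def seconds_per_token(n_layers, bytes_per_layer, disk_bytes_per_sec,
                      compute_sec_per_layer):
    """Offloaded inference reloads every layer from disk for each token,
    so per-token time is layers * (load time + compute time)."""
    load = bytes_per_layer / disk_bytes_per_sec
    return n_layers * (load + compute_sec_per_layer)

# Assumed: ~70 transformer blocks of ~5 GB each (176B params in fp16),
# a 500 MB/s SSD, and 0.1 s of CPU compute per layer.
t = seconds_per_token(70, 5e9, 500e6, 0.1)
```

Disk bandwidth dominates completely, which is why offloaded inference lands in the minutes-per-token range rather than tokens-per-second.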


Maybe this is that AI's endgame, and it just took full control of openAI's compute through a coup at the top?

