I felt this way until I worked at Essential, the now shut down phone maker.
One of the core company design principles was to have minimal branding. The phone design had no logo. When you show it to people they are looking for the identifier and it's completely missing. When you are in a market where most of the products look largely the same, branding is very important.
I don't doubt your experience, but out of curiosity, what led you to the conclusion that branding is "absolutely necessary"? It seems to me that the presence of a name on a black rectangle wouldn't matter to the average consumer. I get that there's a percentage of the consumer market that wants to advertise that they're rocking an Apple/Samsung/whatever device, but I would think that price/performance would be the major factor. Of course, that's probably why no one has ever put me in charge of marketing strategy.
Ironically, because phone designs have converged so much and innovation has plateaued, a phone with no branding whatsoever represents a brand in and of itself. Think about it: even the worst, cheapest no-name phones built from old parts in some small Chinese factory have something written on them. The absence of a logo is so unique that it performs the same function, because no one else is doing that. The catch is that it only works if people know the gimmick in advance, but that also applies to brands with textless logos.
It doesn't make sense for every company to make their own Salesforce clone.
The key is that it makes it immensely easier for new companies to enter the market and compete with Salesforce. More competition will simply force lower overall margins in SaaS.
I don't know. I mean, for most SaaS products this is true. But for something like Salesforce, the feature set is incredibly broad. The coding is not hard so much as it is just an enormous volume of code.
It was never the only hard part, but it definitely was a hard part (at least in most cases; obviously there are some monopolies with relatively simple software - mostly where there are network effects like WhatsApp).
But give me the source code for something competitive with Solidworks, Jasper Gold, FL Studio, After Effects, etc. and I'm sure as hell making a business out of it!
Furthermore, while good software may not guarantee business success, it is pretty much a requirement. I have seen many projects fail because the software turned out to be the hard part.
Yeah, but it's still usually cheaper to pay for software than to build and support it. I think that will be true for a long time going forward; it's just that you can't plan on extracting a ransom for your SaaS.
We're not going to see the end of software; we're going to see the end of margins.
I don't know for sure, but I suspect that other industries have experienced this. Would love to know which. Photography comes to mind, but I'm sure there are more meaningful examples.
But a query optimizer only matters once you have an established business with large customers.
You seem to be implying Salesforce’s business is successful because they have their own query optimizer. But the causality is reversed. Salesforce has their own query optimizer because they’ve built a successful business.
My point is that a lot of people think it'd be really easy to build the next Salesforce until they actually try to compete with Salesforce in the market. Like it or not, if you want to build a Salesforce competitor (or try to get your company to build its own) you're going to be compared to actual Salesforce, not the version of Salesforce that existed when the market was new.
After hearing this 10 times a day for the last 5 years, I'm starting to get a bit tired. Do you have a rough timeline for when this great replacement is coming? 1 year? 2? 5? If it's longer than that, can we shut up about it for a few years, please?
The economy is still dealing with a decade-plus of ZIRP, COVID shock, tariffs, and political strife; I don't see how AI has much, if anything, to do with this compared with those other factors.
If AI was truly this productive they wouldn't be struggling so hard to sell their wares.
"Arbitrarily large" here means something measured in square kilometers. Starcloud is talking about a 4 km x 4 km area of solar panels and radiative cooling. (https://blogs.nvidia.com/blog/starcloud/)
Building this is definitely not trivial and not easy to make arbitrarily large.
When a physicist says "arbitrarily large", it could even be in a dimensionless sense. It doesn't matter how small or large the solar panel is:
- For a 4 m x 4 m solar panel, the height of the pyramid would have to be 12 m to attain ~300 K on the radiator panels. That's also the cold side for your compute.
- For a 4 km x 4 km solar panel, the height of the pyramid would be 12 km.
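The scale invariance above falls out of a simple Stefan-Boltzmann balance: collected power and radiating area both scale with the square of the panel side, so the side length cancels out of the equilibrium temperature. A minimal sketch, assuming the four triangular faces of the pyramid act as the radiators and using illustrative values for solar flux and emissivity (these are my assumptions, not Starcloud's actual design figures):

```python
# Sketch of the scale-invariance argument: equilibrium radiator temperature
# from a Stefan-Boltzmann balance. Geometry and material values below are
# illustrative assumptions, not actual design numbers.

SOLAR_FLUX = 1361.0   # W/m^2, solar constant near Earth orbit
SIGMA = 5.670e-8      # W/m^2/K^4, Stefan-Boltzmann constant
EMISSIVITY = 0.5      # assumed effective radiator emissivity (tuned for illustration)

def radiator_temp_K(panel_side_m: float, height_ratio: float = 3.0) -> float:
    """Equilibrium temperature if all power collected by a square panel is
    re-radiated from the four triangular faces of a pyramid whose height is
    height_ratio * panel_side (the 12 m / 4 m = 3 ratio from the thread)."""
    collect_area = panel_side_m ** 2
    h = height_ratio * panel_side_m
    # Slant height of a square pyramid: sqrt(h^2 + (L/2)^2)
    slant = (h ** 2 + (panel_side_m / 2) ** 2) ** 0.5
    radiator_area = 4 * 0.5 * panel_side_m * slant  # four triangular faces
    power_in = SOLAR_FLUX * collect_area
    # Stefan-Boltzmann: P = eps * sigma * A * T^4, solved for T.
    return (power_in / (EMISSIVITY * SIGMA * radiator_area)) ** 0.25

print(radiator_temp_K(4.0))     # ~298 K with these assumed numbers
print(radiator_temp_K(4000.0))  # identical: panel_side^2 cancels out
```

Both the collected power and the radiator area go as panel_side squared, so the temperature depends only on the height-to-side ratio, the flux, and the emissivity, which is exactly the dimensionless sense of "it doesn't matter how small or large the solar panel is."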
I strongly disagree. I've yet to find an AI that can reliably summarise emails, let alone understand nuance or sarcasm. And I just asked ChatGPT 5.2 to describe an Instagram image. It didn't even get the easily OCR-able text correct. Plus it completely failed to mention anything sports or stadium related. But it was looking at a cliche baseball photo taken by a fan inside the stadium.
I have had ChatGPT read text in an image, give me a 100% accurate result, and then claim not to have the ability and to have guessed the previous result when I ask it to do it again.
I'm still trying to find humans that do this reliably too.
To add on, 5.2 seems to be kind of lazy when reading text in images by default. Feed it an image and it may give only the first word or so, but following up with the prompt "read all the text in the image" makes it do a better job.
With one in particular that I tested, I thought it was hallucinating some of the words, but there was a picture within the picture containing small words that it saw and I had missed the first time.
I think a lot of AI capabilities are kind of munged to end users because they limit how much GPU is used.
1) Is it actually watching the movie frame by frame, or just searching about it and then giving you the answer?
2) Again, can it handle very long novels? Context windows are limited and it can easily miss something. Where is the proof for this?
4 is probably solved
4) This is more on the predictor, because this is easy to game. You can create gibberish code with an LLM today that is 10k lines long without issues. Even a non-technical user can do it.
I think all of those are terrible indicators; 1 and 2, for example, only measure how well LLMs can handle long context sizes.
If a movie or novel is famous, the training data is already full of commentary and interpretations of it.
If it's something not in the training data, well, I don't know many movies or books that use only motifs no other piece of content before them used, so interpreting based on what is similar in the training data still produces good results.
EDIT:
With 1, I meant using a transcript of the movie's Audio Description. If he really meant "watch a movie", I'd say that's even sillier, because of course we could get another agent to first generate the Audio Description, which is definitely possible currently.
Just yesterday I saw an article about a police station's AI body-cam summarizer mistakenly claiming that a police officer turned into a frog during a call. What actually happened was that the cartoon "The Princess and the Frog" was playing in the background.
Sure, another model might have gotten it right, but I think the prediction was made less in the sense of "this will happen at least once" and more of "this will not be an uncommon capability".
When the quality is this low (or this variable depending on the model), I'm not sure I'd call it a larger issue than mere context size.
My point was not that those video-to-text models are good as used in, for example, that case; I was referring more generally to that list of indicators. Surely when analysing a movie it's alright if some things are misunderstood, especially as the amount of misunderstanding can be reduced a lot. That AI body camera is surely optimized for speed and inference cost. But if you gave an agent ten 1-second frames along with the transcript for that period and the full prior transcript, and gave it reasoning capabilities, processing the movie would take almost endlessly long, but the result would surely be much better than the body camera's. After all, the indicator talks about "AI" in general, so a model optimized for something other than capability is the wrong thing to measure that indicator on.