deepGem's comments

I built something similar at a hackathon: a dynamic teleprompter that adjusts the teleprompting speed based on speaker tonality and spoken WPM. I can see extending the same to an improv mode. This is a super cool idea.
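
For flavor, here's a minimal sketch of just the WPM half of that idea in Python (the names are hypothetical and the tonality signal is left out):

    # Hypothetical sketch: WPM-driven teleprompter pacing.
    from collections import deque

    class PaceController:
        def __init__(self, base_wpm=140.0, window_secs=10.0):
            self.base_wpm = base_wpm   # rate the base scroll speed is calibrated for
            self.window = window_secs  # sliding window for measuring speaking rate
            self.word_times = deque()  # timestamps of recently recognized words

        def on_word(self, timestamp):
            # Call once per word emitted by the speech recognizer.
            self.word_times.append(timestamp)
            while self.word_times and self.word_times[0] < timestamp - self.window:
                self.word_times.popleft()

        def scroll_multiplier(self):
            # Scale scroll speed by measured WPM vs. baseline, clamped so
            # pauses and bursts don't make the text jump around.
            if len(self.word_times) < 2:
                return 1.0
            elapsed = self.word_times[-1] - self.word_times[0]
            if elapsed <= 0:
                return 1.0
            wpm = (len(self.word_times) - 1) / elapsed * 60.0
            return max(0.5, min(2.0, wpm / self.base_wpm))

On each recognized word you'd call on_word(time.time()) and multiply the base scroll rate by scroll_multiplier().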

Isn't that such a great outcome? No more robotic presentations. The best part is that you can now practice improv in the comfort of your home.

And this product will work great for any industry... can I get a suggestion for an industry from the crowd?

Audience: Transportation... Education... Insurance...

Speaker: Great! I heard "Healthcare".

Right... as we can see from this slide, this product fits the "Healthcare" industry so well because of ...


Caro’s first LBJ biography tells of how the future president became a congressman in Texas in his 20s by carting around a “claque” of his friends to various stump speeches and having them ask him softball questions and applaud loudly afterward.

Well, hey, who needs friends?


This is a huge one. What Musk is looking for is freedom from land acquisition. Everything else is an engineering and physics problem that he will somehow solve. The land acquisition problem is out of his hands, and he doesn't want to deal with politicians. He learned this from building out the Memphis DC.


Maybe, but I'm skeptical, because current DCs are not designed to minimize footprint. Has anyone even built a two-story DC? Obviously cooling is always an issue, but not, directly, land.

Now that I think of it, a big hydro dam would be perfect: power and cooling in one place.


> Has anyone even built a two-story DC?

Downtown Los Angeles: the One Wilshire building, the world's most connected building. There are over twenty floors of data centers. I used Corporate Colo, which was a block or two away. That building had at least 10 floors of data centers.


I think Downtown Seattle has a bunch too (including near the Amazon campus). I just looked up one random one, and it uses about half the total reported square footage of a 10-story building for a datacenter: https://www.datacenters.com/equinix-se3-seattle


Multistory DCs are commonplace in major cities.


> Has anyone even built a two-story DC?

Every DC I’ve been in (probably around 20 in total) has been multi-storey.


Skepticism is valid. The environmentalists came after dams too.


So freedom from law and regulation?


[flagged]


So why does he not build here in Europe, then? Getting a permit for building a data center in Sweden is just normal industrial zoning that anyone can get cheaply, and there is plenty of it. The only challenge is getting enough electricity.


I meant that Europe is an example of how not to do regulation, i.e. the problem you just mentioned: if you get land easily, electricity won't be available, and vice versa.


Then maybe you should move here. In most cases we have well-functioning regulations. Of course there are counterexamples where it has been bad, but data centers are not one of them. It is easy to get a permit to build one.


Why is it an example? Can you cite any case where "regulation" blocked the construction of a properly designed datacenter?

Or what you meant was "those poor billionaires can't do as they please with common resources of us all, and without any accountability"?

As a quick anecdote, there is a DC under construction in Portugal with a projected capacity of 1.2GW, powered by renewables.


There are also a bunch of countries pretty much begging companies to come and build solar arrays. If you rocked up in Australia and said "I'm building a zero-emission data center we'll power from PV," we'd pretty much fall over ourselves to let you do it. Plus, you know, we have just a bonkers amount of land.

There is already a Tesla grid-levelling battery in South Australia. If what you're really worried about is regulations making it expensive to put in the renewable energy, then boy have I got a geopolitically stable, tectonically stable, first-world country where you can do it.


Where a random malicious president can't just hijack the government and giga-companies can't trivially lobby lawmakers for profits at the expense of citizens?


A random malicious president? One who was democratically elected by more than 70% of the country?


> Not all law and regulation is created equal. Look at Europe.

You're spot on, but you are not saying what you think you're saying.


He "learned" by illegally poisoning black people

> an engineering and physics problem that he will somehow solve

No, he won't.


[flagged]



Thank you. This is really nasty. Boxtown residents should sue xAI and take them to court.


I'm confused, wouldn't this be just using the power of the government to enforce short-sighted, tech-hostile regulations like "datacenters should not poison people"?


What will eventually pan out is that senior devs will be replaced with junior devs powered by AI assistants, simply for the reasons you stated. They will ask the dumb important questions, and after a while, will even solve them.

Now that their minds are free from routine and boilerplate work, they will start asking more 'whys' which will be very good for the organization overall.

Take any product: nearly 50% of the features are unused, and it's a genuine engineering waste to maintain them. A junior dev spending 3 months on the code base with Claude Code will figure out these hidden unwanted features and cull them, or ask for them to be culled.

It'll take a while to navigate the hierarchy but they'll figure it out. The old guard will have no option but to move up or move out.


Why would Claude code help you find unused features? The end customer uses features, not the AI. I would want to know from the end customer whether a feature is unused, and a Junior with an LLM assistant is not going to be able to tell me that without adding new features to the code base.


I'm using Claude Code as an approximation here. Two years down the line, the tooling around analytics will get integrated into AI assistants, and they will absolutely be able to figure out unused features.
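
To make that concrete, here's a toy sketch of the kind of check such an assistant might run, assuming it can pull per-feature event counts from an analytics store (the schema and threshold are made up):

    # Hypothetical sketch: flag features with little or no recorded usage.
    def find_unused_features(usage_counts, all_features, min_events=10):
        return sorted(f for f in all_features if usage_counts.get(f, 0) < min_events)

    usage = {"export_csv": 12840, "dark_mode": 3021, "fax_report": 2}
    features = {"export_csv", "dark_mode", "fax_report", "legacy_import"}
    print(find_unused_features(usage, features))  # ['fax_report', 'legacy_import']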


How do you suppose the old guard are filling their days now?

At some level, aren’t you describing the age-old process of maturing from junior to mid level to senior in most lines of work, and in most organizations? Isn’t that what advancing in responsibility boils down to: developing subtlety and wisdom and political credibility and organizational context? Learning where the rakes are?

I wish 3 months, or even 3 years, were long enough to fully understand the whys and wherefores and politics of the organizations I cross paths with, and the jungle of systems and code supporting all the kinds of work that happen inside…


There is no competing product for GPT Voice. Hands down. I have tried Claude and Gemini; they don't even come close.

But voice is not a huge traffic funnel. Text is. And the verdict is more or less unanimous at this time: Gemini 3.0 has outdone ChatGPT. I unsubscribed from GPT Plus today. I was a happy camper until last month, when I started noticing deplorable bugs.

1. The conversation contexts are getting intertwined. Two months ago, I could ask multiple random queries in one conversation and get correct responses, but for the last couple of weeks it's been a harrowing experience, having to start a new chat window for almost any change in topic.

2. I once asked ChatGPT to treat me as a co-founder and hash out some ideas. Now for every query I get a 'cofounder type' response. Nothing inherently wrong, but annoying as hell. I can live with the other end of the spectrum, where Claude doesn't remember most of the context.

Now that Gemini Pro is out, yes, the UI lacks polish and you can lose conversations, but the benefits of low-latency search and a near-free one-year subscription are a clincher. I am out of ChatGPT for now, 5.2 or otherwise. I wish them well.


Just a note: ChatGPT does retain a persistent memory of conversations. In the settings menu, there's a section that allows you to tweak or clear this persistent memory.


I found the Gemini CLI extremely lacking and even frustrating. Why Google would choose Node…

Codex is decent and seems to be improving (being written in Rust helps). Claude Code is still the king, but my god, they have server and throttling issues.

Mixed bag wherever you go. As model progress slows / flatlines (already has?) I’m sure we’ll see a lot more focus and polish on the interfaces.


Codex is king


What's that near-free subscription? I don't see it here.


They had 9.99 for the first year.


Oh I must have missed that, thanks.


Yeah, the best I've seen is like 1.99 for two months, then back to normal pricing...


Product doesn't see the point of engineers being engaged and treats the engineering team like an in-house outsourcing shop.

Because they want to feel superior, as in the ‘this was my idea and you executed on my idea’ nonsense. Their answer to most ‘why are we doing this?’ questions is ‘trust me bro’. I am perhaps generalizing, and there are outlier product managers who have earned the ‘trust me bro’, but most haven’t.

This PM behaviour will never change. Engineers have said enough is enough and are now taking over product roles, in essence eliminating the communication gap.


This resonates very deeply with my experiences.

> Their answer to most ‘why are we doing this?’ questions is ‘trust me bro’.

I've profoundly annoyed so many PMs asking this question; I don't get it. I believe they don't want to admit it's an exec's idea-of-the-week rather than the product of market/biz/customer research and analysis.

> Engineers have said enough is enough and are now taking over product roles

Fingers crossed. It's about time we up our communication- and managing-upwards skills. I feel many PMs are sustaining their roles just because they're sycophantic yes-men to the execs, because execs got tired of engineers saying "no".

Having read a few criticisms of PMs on HN, I can imagine the "your companies just didn't hire the good PMs" comments incoming.


> Having read a few criticisms of PMs on HN, I can imagine the "your companies just didn't hire the good PMs" comments incoming.

Everything you said in your post is true, especially about 90% of PMs being presenteeist yes-men. Indeed, most PMs are at best a waste of time, and at worst a net negative to the company and anything they touch.

However, a good PM is worth their weight in gold. I maintain the cynical view that 80% of the work done at any large company is useless. That's why a good PM is so invaluable.

A good PM is the difference between your project aimlessly spinning its wheels and changing directions for 8 quarters (like most projects), or relentless execution with full focus and rewards from higher-ups.

Clarifying what executives want, nudging their worst impulses towards something more productive, maintaining focus and clear communication amongst multiple teams with competing priorities, working with engineers to design features and schedule them realistically on the road map, exploring the company beyond your current team to find impactful projects to work on or to join forces with... All these things are exhausting, painstaking, and take a level of attention to technical details and human affairs which most of us don't have the patience or energy to deal with. It's more than a full time job.

But if a PM does it successfully, you actually ship important stuff, and that stuff is so important that it moves the whole company forward, and improves the bottom line so much that no one can ignore it. And that's why the PM role continues to exist, despite most of its practitioners being useless suckups. The impact just one PM can deliver by shipping a successful and important project at a large company outweighs all the useless baggage that is the rest of their colleagues. And that's why you continue to invest in your PM org, and hope you get a few nuggets of gold amongst all those turds.


Yes that's a nice sales pitch for PMs to exist. I have never criticised PMs as being totally pointless. I've worked with some decent enough ones.

I'm guessing you're a PM or have been? You did the classic 'opening agree to disarm, then disagree with a long sales pitch' that they're so good at ;)

It seems to me you're vastly overselling the impact of even the 50th percentile of PMs.

I think project management is useful (I'm not going to get into the weeds of product manager vs. project manager vs. PO, etc.). But I feel we've over-promoted classic project managers into roles driving product direction without the experience or acumen for it.


I'm an engineer. I don't think I'm overselling the impact of them, I called the majority of them useless lol.

I agree with everything else you wrote. Especially the bit about over promoted project managers. That's my thinking as well.


I would take ChatGPT/Claude over an IBM consultant any day. I worked at IBM.


I'd rather be slapped in the face than kicked by a horse, but that doesn't mean either is a good thing


Precisely.


You could have both worlds: an LLM by IBM: https://huggingface.co/ibm-granite/granite-4.0-h-small

It wasn't very promising when it came to benchmarks, though. Go figure: https://artificialanalysis.ai/leaderboards/models


"I don't "understand" how LLMs "understand" anything."

Why does the LLM need to understand anything? What today's chatbots have achieved is a software engineering feat. They have taken a stateless token-generation machine that has compressed the entire internet's text to predict the next token, and they have 'hacked' a whole state-management machinery around it. The end result is a product that feels like another human conversing with you and remembering your last birthday.
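
A minimal sketch of that state-management hack, with the model call left as a stub, since the wrapper, not the model, is the point here:

    # Sketch only: complete() stands in for any stateless completion API.
    def complete(prompt: str) -> str:
        ...  # stateless next-token machinery (the LLM) goes here

    class ChatSession:
        def __init__(self, persistent_memory=""):
            self.memory = persistent_memory  # e.g. "user's birthday is March 3"
            self.turns = []                  # the conversation so far

        def ask(self, user_msg: str) -> str:
            self.turns.append(("user", user_msg))
            # All state is re-serialized into the prompt on every call.
            prompt = f"Memory: {self.memory}\n" + "\n".join(
                f"{role}: {text}" for role, text in self.turns
            )
            reply = complete(prompt)
            self.turns.append(("assistant", reply))
            return reply

All the "memory" lives in the application layer; the model sees a freshly assembled prompt on every single call.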

Engineering will surely get better, and while purists can argue that a new research perspective is needed, the current growth trajectory of chatbots, agents, and code-generation tools will carry the torch forward for years to come.

If you ask me, this new AI winter will thaw in the atmosphere even before it settles on the ground.


What I really fail to understand is how departments like the BLS can screw up to this extent. Either they are grossly incompetent or they are intentionally corrupt.

> The data covers the period from March 2024 to March 2025 and trims the average monthly jobs gains seen during this period (roughly the last 10 months of Joe Biden's presidency and the first two months of Trump's) from a monthly average of 147,000 to about 71,000.

A 50% error, and it's more or less consistent. How can a department have this error rate and keep their jobs? I understand the data collection mechanism is not the most sophisticated, but even accounting for that, this consistent error rate is not to be overlooked.

I wonder why there is such lack of accountability from firms whose data pretty much feeds the world's economy.


There's a little bit of a philosophical thing here: do you adjust the earlier measurement by some function because it has usually been high and revised downwards? If you do, you need to let every user know that you're doing something different, etc.

The worst case is that both the statistics orgs and the users are adjusting the numbers for a bias and overshooting.

This means there's a certain inertia: it can be better to handle the interim reports the same, even if they've been biased one way for several years, than to introduce a change that makes the numbers not comparable to history.
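
A toy illustration of that worst case, with made-up revision numbers:

    # Made-up numbers: historical gaps between preliminary and final figures
    # (thousands of jobs), all biased the same way.
    prelim_minus_final = [76, 60, 81, 70]
    avg_bias = sum(prelim_minus_final) / len(prelim_minus_final)  # ~71.75

    preliminary = 147                           # this month's raw estimate
    agency_adjusted = preliminary - avg_bias    # the stats org corrects for bias
    user_adjusted = agency_adjusted - avg_bias  # a user who also corrects overshoots
    print(agency_adjusted, user_adjusted)       # ~75.2 vs ~3.5: corrected twice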

> 50% error.

It's not a 50% error; it's a 50% error in the magnitude of the change.

That's like saying that my room increased from 71.4 to 71.6 degrees, but my thermometer only saw an increase from 71.4 to 71.5; therefore, my thermometer has a 50% error.


> This means there's a certain inertia: it can be better to handle the interim reports the same, even if they've been biased one way for several years, than to introduce a change that makes the numbers not comparable to history.

This is a very interesting point. So if the BLS suddenly became more accurate, all the agencies would have to re-tune their own biases and corrections, which could lead to short-term discrepancies.

What one sees as inefficiency is actually efficient when viewed through a totally different lens.


Yup. Obviously you want to fix the bias in the early version of the numbers eventually.

But you don't want to change what you're doing all the time, so you stay an easy product for everyone else to use.

(Interesting that this "overreport jobs in the preliminary numbers" bias has shown up; in older data using similar methodology it didn't exist, but now it seems to...)


It’s a survey; there are a lot of non-responses (getting worse) and late responses. They try to correct for this, and that normally works, but when “weird” things are going on the corrections can be pretty wrong. The people who use these numbers understand all of this and it’s fine. It’s just the popular media that freaks out.

The quarterly numbers come from better data sources (tax withholding, unemployment insurance payments, etc)


For example, if a small business goes out of business and fires everyone, they probably won’t respond to the survey. If the rate of small business failures is not what we normally see (in absolute or relative numbers), it can create a bias that throws the models off.


Let me guess: they are calling landlines, and no one picks up unknown numbers on cell phones. They are hopelessly behind the times.


It’s a survey of businesses, so they have a list; I think the businesses are enrolled to take part. It’s not a random phone survey. I bet it’s email.


You might want to read up on the process a little more before forming such strong opinions. The BLS is extremely transparent, at least for now. The earlier reports are intentionally noisier because there is value in being fast and then revising later, and everyone who uses this data is aware of that.

Calling this a 50% error rate is simply wrong. If an earlier report said a single job had been created and that was later revised to two jobs, that would be superhumanly accurate, and yet you would be calling for everyone to be fired over the 100% error rate.


> The earlier reports are intentionally noisier because there is value in being fast and then revising later, and everyone who uses this data is aware of that.

I get it, and yeah, my tone was very exaggerated. I don't think anyone at the BLS should be fired, and whoever is suggesting that does not understand how public institutions work.

I am just curious why there is so much of a discrepancy. This has been pretty much the status quo at the BLS for a long time: they issue numbers and then revise them later. However, you'd expect the revisions to stay within a moderate error percentage.

Also, how will this retroactive change help everyone involved? OK, the new job numbers reflect a gloomier past (or a more vibrant one), but how does that help everyone who is so focused on 'what's going to happen tomorrow'?

I retract my stance about BLS being intentionally corrupt - that's uncalled for.


I disagree. The BLS head was either incompetent or corrupt. The two are not mutually exclusive so they easily could have been both. Firing the BLS head was the obvious choice and it sends the right message to all involved that we will not accept this margin of inaccuracy when it impacts our economy. We need a BLS head with the balls to raise hell when the numbers are not reflecting reality.


They rely on reporting from employers, and employers don’t report that data quickly enough, so there’s an amount of estimation.

Over time they get better numbers relating to previous quarters and they revise their numbers.

Also employers can report revised numbers for a quarter, to make corrections.


There are ~160 million working adults in the US. Jobs numbers being off by 100k in either direction doesn't seem that bad when you consider that.


They are neither.

Revisions and surprises are routine. Data comes in gradually, but estimates are useful even before all the data has arrived. Early data is based on business reporting, and businesses that report on time aren't necessarily representative of all businesses. The people who use this data know this and prepare for revisions.

I hope this helps you understand better. Does anyone here still think this is incompetence or corruption because surveys come in late?

>I wonder why there is such lack of accountability from firms whose data pretty much feeds the world's economy.

Create a punishment system? Unless companies report data back to the BLS very fast, they pay a big fee or are taxed higher. Small shops would hate it.


> Create a punishment system? Unless companies report data back to the BLS very fast, they pay a big fee or are taxed higher. Small shops would hate it.

Or incentivize companies to report accurate data quickly. Payroll management systems can be plugged in for real-time reporting, but that costs money, and yeah, small businesses are not going to be happy. So I think incentives work better than punishment.


I feel like the IRS has near real time employment data simply based on tax withholding payments they receive every 2 to 4 weeks.


That comes out quarterly


Yes, and to add, it would count anyone who had any wages in that quarter, not necessarily those who are still employed.

https://www.irs.gov/pub/irs-pdf/f941.pdf


People who don't do statistics and don't understand how the BLS gets its data think there must be corruption when it corrects previous reports. Think of it this way: if they correct their previous estimates with new data, isn't that a sign they are NOT corrupt? Why would a corrupt organization correct its previous reports?

I advise you to do a little reading on how these reports are corrected. People relying on them understand how they work. People freaking out about them don't.


One factor is that part of their measurement is tracking how many new companies are founded and estimating how many employees a new company is likely to have. These numbers have been trending down and recently fell rapidly, due to a lot of new companies being one-person LLCs for gig-economy work. It appears they haven't kept up with this trend, so they overestimate how many jobs these companies will have.


> how can departments like BLS screw up to this extent

My null hypothesis might be that the BLS works for the government, so how can they not be under (implicit) pressure to goal-seek their figures?

Once the figure has been published and widely reported, it can be revised downwards months later; few will care. The system may be broken by design.


This is the furthest thing from a null hypothesis. This is dressing your conscious biases in scientism.


> This is the furthest thing from a null hypothesis

"It is difficult to get a man to understand something, when his salary depends on his not understanding it"

Q: Why would one trust initial BLS jobs figures under this - or indeed any other - administration?

> This is dressing your conscious biases in scientism

BLS figures being revised downward month after month after month is data, not bias.


No, those are anecdotes.

Actual data would mean measuring predictions against outcomes over several decades.


Maybe you don’t understand the role of the BLS or what it does. Maybe you’ve been sold a bill of goods that it is supposed to be an infallible oracle, when it is, in fact, a useful measurement device with limitations that have been well known for decades.


[flagged]


Because the margin of error is like sub 0.01% or something?


Because they're asked to always have rosier numbers. They've done this forever.


And it is telling that a certain flavor of partisan finds it novel.

"No Matter Who Is President, Don’t Trust Government Data"

>There isn’t clear, undeniable evidence that officials at the BLS are editing or making up the jobs numbers that go out to the public. But it’s easy to see why people think that they are when you look back at the series of dramatic downward revisions the Bureau has made in recent years—especially during Biden’s presidency.

>The monthly jobs report is a recurring, previously scheduled drop that all major media outlets publish immediately. And whenever the headline number is dramatically large or at all higher than expectations, White House officials are quick to seize on the news to frame it as a consequence of their brilliant economic agenda. However, when these jobs figures consistently get revised at a later date—ostensibly due to new information—the revisions are rarely given the same level of attention by the media and are therefore only really noticed by the small subset of the population that is closely monitoring economic data.

>So, as an example, the Biden administration was able to loudly celebrate BLS reports showing dramatic job growth month after month. And when almost all of that growth was revised away in future reports, very few people noticed. The consistently inaccurate jobs reports gave the public the false impression that the economy was booming. The fact that this was due to the same kind of mistake apparently being made over and over again struck many as suspicious. And rightfully so.

>It is important to note, however, that this has continued after Biden left office. So if the BLS really was propping up initial jobs reports to make the economy look stronger under Biden, then by every meaningful indication, they have done the same thing under Trump—at least so far.

https://mises.org/mises-wire/no-matter-who-president-dont-tr...


They do it in Canada too, so much so that people have gone on to call it a random number generator. I've never seen it revised upwards.


I spent a few weeks trying to build an alternative to self-attention that scales memory linearly, and I got surprisingly good results. While in principle this makes a lot of sense, I am struggling to push the test accuracy above 86%.

Some of the alternatives I am about to consider:

1. Diffusion with sparse attention layers.

2. Hierarchical diffusion: next-token diffusion combined with higher-order chunk diffusion.

Still figuring out the code, and I would love any feedback on these approaches.
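
For reference, one standard way to get linear-memory attention is the kernelized form (a la Katharopoulos et al., 2020). This is just a common baseline, not exactly what I built, and shown non-causal for brevity:

    # Kernelized linear attention: memory is O(dim^2) instead of O(seq^2).
    import torch

    def linear_attention(q, k, v, eps=1e-6):
        # q, k: (batch, seq, dim); v: (batch, seq, dim_v)
        q = torch.nn.functional.elu(q) + 1       # positive feature map phi(x)
        k = torch.nn.functional.elu(k) + 1
        kv = torch.einsum("bsd,bse->bde", k, v)  # sum_s phi(k_s) v_s^T
        z = k.sum(dim=1)                         # normalizer: sum_s phi(k_s)
        num = torch.einsum("bsd,bde->bse", q, kv)
        den = torch.einsum("bsd,bd->bs", q, z).unsqueeze(-1) + eps
        return num / den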

