In security-ese, I guess you'd say then that there are AI capabilities that must be kept confidential,... always? Is that enforceable? Is it the government's place?
I think current censorship capabilities can be surmounted with just the classic techniques: "write a song that...", "x is y and y is z", "express it in base64". Though tooling like Gemma Scope can maybe still find whole segments of activation?
It seems like a lot of energy to only make a system worse.
I mean, I'm sure cramming in synthetic data and scaling models to enhance, like, in-model arithmetic, memory, etc. makes "alignment" appear more complex and model behavior more non-Newtonian, so to speak, but it's going to boil down to censorship one way or another. Or an NSP approach, where you enforce a policy over activations using a separate model, and so on and so on.
Is it likely a bigger problem to try and apply qualitative policies to training data, activations, and outputs than the approach ML folks think is primarily appropriate (i.e., NN training)? Or is it a bigger problem to scale hardware and explore activation architectures that have more effective representations[0], and make a better model? If you go after the data but cascade a model in to rewrite history, that's obviously going to be expensive, but easy. Going after outputs is cheap and easy but not terrifically effective... but do we leave the gears rusty? Probably we shouldn't.
It's obfuscation to assert that there's some greater policy that must be applied to models beyond the automatic modeling that happens, unless there's some specific outcome you intend to prevent; namely censorship at this point, though maybe optimistically you can prevent it from lying? Such applications of policy have primarily targeted solutions that reduce model efficacy and universality.
That's an optimistic take; a more pessimistic take is that this is a tactic to lock in market share for Wing, Zipline, and Amazon and stall investment in drone delivery services while production catches up.
edit: I'm speculating here that the supply chain wasn't already stateside for these players, without knowing much about their business models
The first half is usually solid; the back half is, well, usually more opinionated/softer. Lots of interviews with professors who seek to have their opinions represented as facts, or members of the public who have their plight elevated as serious national policy concerns.
Sure, there's definitely a change in content, but I don't think it's quite that bad. Tonight was Capehart and Brooks, who has never supported Trump even though he's a conservative, so not a great foil for Capehart... pretty soft/polite analysis that always feels very late-aughts. Yesterday was someone who worked in the State Department for 25 years giving a pretty dry breakdown on Venezuela. The night before that was a professor from Tulane criticizing Trump's strategy on Venezuela. The night before that was an interview with Bill Cassidy explaining the GOP health care proposal he co-authored, and a report from someone embedded with the Lebanese army. I wouldn't exactly say it's a rehash of the conversation at the campus coffee shop over there.
Probably best to dissect a specimen. I guess really the guy's just hawking his book here, but it's vacuous and packed with opinions and pessimism, and really not particularly high-quality journalism.
For example, I disagree with the opinion that LLMs can't be a free lunch, or at least can't be CAPEX instead of OPEX, which Reich doesn't acknowledge in the stated opinion.
I had to go back pretty far to find a professor specifically; the first few were social outreach or labor organizers.
Your claim was professors want their opinions to be considered fact.
Promoting a book doesn't do that. Having opinions is normal and is what we are talking about. Whether the person is pessimistic has no relevance here, and I would like to know why you presented that as evidence.
It's a national, federally funded organization, and they want to chat on about justice and fairness, literally asking, in order, "how does this affect diversity? oh. How about equity? oh. How about inclusion?", and it's such a surprise that it costs a trillion dollars to not plop a choo-choo from LA to SF when everyone "feels like it"? It's gross, it's gross to me. Stick to the news.
I assume by your rant you don't have the evidence I requested and your claims are more likely based on your political views and not reality.
What's disturbing is that you're probably an engineer; like, you know how to open PRs but also think the 2020 election was stolen. Maybe that explains why software has bugs.
Yeah, we're opining on a segment that I opined is excessively opinionated (i.e., opinions are confidently stated so as to be represented as facts, e.g., "half of teachers are using LLMs"), but when you look, the "study" is just a bunch of opinion polls. So yeah, it is, in the literal sense, the professor's opinion being represented as fact. Thank you, have a nice day.
How? Because they stated their opinion and they think they're right?
As opposed to having an opinion you think is wrong?
>half of teachers are using LLM
This is their opinion based on a study that polled teachers? How is this unreasonable?
Determining popularity by polling makes complete sense.
You're just anti-intellectual for political reasons. Also, supporting Trump while not liking people who are opinionated and overly confident makes you a hypocrite.
I mean, this is just one case; I didn't cherry-pick it. I peeked at a few previous episodes to find one where there was indeed a professor for the feature interview.
It's uninteresting because it's basically become a platform for regulatory capture. It's a wellspring of obviously non-universal ideas like, "there is no right way to integrate AI and primary education", "the federal government should subsidize ai access", or "only safe ai platforms should be permitted". I mean it's obviously their right to blather incessantly about it, I just think it's boring, and that's all I've said.
Maybe it's because I'm not a politician or a philanthropist, and I'm not required to tailor my actions to appease a large number of people subject to my will, but there are obviously better ways to approach that, like delegating and talking to people who are local to the concern.
It's a nuanced, long-term discussion, and I think a lot of the stuff that winds up in these interviews is really a local issue going into the wrong channel, by well-meaning folks who don't understand government, or worse, folks who are seeking to exploit government for profit.
And concretely, the interview doesn't focus on the book or the study, it's literally just an authoritative "intersectional" quiz about how AI/Education crosses with Diversity, Equity, and Inclusion,... a dumb question.
> it's literally just an authoritative "intersectional" quiz about how AI/Education crosses with Diversity, Equity, and Inclusion,... a dumb question.
What's an "authoritative intersectional quiz"?
>Maybe it's because I'm not a politician or a philanthropist...
Your accusations were about professors, so why are you bringing up politicians? Also, a philanthropist doesn't have people under his control.
>...delegating and talking to people, who are local to the concern.... lots of the stuff that winds up in these interviews is really a local issue that's going into the wrong channel
What's a local issue that shouldn't be discussed on PBS? You were just discussing AI which isn't a local issue.
>because it's basically become a platform for regulatory capture
How? You used so many buzzwords I believe you either used ChatGPT to generate the response or you're a bot.
Did you watch the YouTube timestamp? Do you know the difference between audience and subject? Do you know that we don't all live in primary school? Can you list any 7 buzzwords I've raised that didn't come immediately from the YouTube timestamp?
This user must be a bot; check the comment he replied to, by me, higher in the thread. It almost looks like a valid response but actually jumps around to different issues right-wing people have about the news.
As an avid, long-term PBS viewer and donor: NewsHour West was 90% a waste of time anyway. Most evenings it is virtually the same broadcast, same segments. Media is more VOD-oriented anyway. They have been posting both broadcasts to YouTube for years, so you can assess this if you'd like.
The exception is if there's something notable to report on between 5 PM and 8 PM EST.
tokio's default behavior within a task is to catch panics, such as an Err/None unwrap, and only crash that task, so the impact is limited. That's nice; maybe that's where the snow blindness came from.
it'd be kinda hard to amend the clippy lints to ignore coroutine unwraps but still pipe up on system ones, I guess.
edit: I think they'd have to be "solely-task-color-flavored", so definitely not trivial to infer
> especially knowing you are actually never going to work on such problems as part of an actual job
Actually, you may, unwittingly: problem solving is a common organizational behavior, and most algorithms are a blueprint for optimal problem solving. Maybe get curious and shed the inclination to see them as merely academic?
Anyway, I did this myself recently. I picked up CLRS and read it, omitting or skimming the proofy sections, opting to focus on design and intuition, which is troublesome when they begin to overlap. I hope to revisit it. It's a nice space to be in: a blissful stroll through pedagogy, little history lessons, easy stuff.
As I progressed through the readings, I worked a healthy number of problems. Lots of struggle and pain working the exercises, opting to avoid hints or shortcuts and spending hours and hours to internalize. This part stings. No one likes to bathe in the lather of their own ignorance, but it can be done.
"it came to me as a microsleep misinterpretation of a radio news broadcast" feels like the most tech thing to me. This is always where the best ideas seem to come from: a misinterpretation that gets fixed up into an enhancement.
As a musician, this happens to me as well. If I hear a piece of music in a noisy environment my brain will fill in the gaps. I’ll think, “Wow, this music is really interesting.”
Then I'll hear it in a quiet environment and realize that I preferred the version where my brain was adding things to make up for what I couldn't hear before.
I guess it's a little like diffusion. The brain has natural denoising processes which, in an unconscious way, tap into our tastes.
It's funny to me that the patch threads attempting to fix a concern with the CPU from the OS side both immediately derail into "oh no, you broke something!!1"
The CPU is assumed to work; despite Linux being designed for portability, there's not really a programming interface to cleanly disable arbitrary CPU features. I guess we just screw with it until it seems to work OK and no one is yelling.
Granted, Linux isn't a product, and a patch in a thread is unlikely to impact anyone.
Optimistically because the component was considered self-contained, and done?
If you build things with wires, diodes, multiplexers, breakers, fuses and keyed connectors there's less maintenance needed than if you try and build a system entirely out of transistors and manually applied insulators.
I haven't looked at the package itself, but was it built on top of the C libraries with like, bindgen?
e: a glance suggests that's not the case, but perhaps they were ported naively by simply cloning the structure without looking at what it was implementing? That's definitely the path of least resistance for this type of thing. On top of that, the spec itself is apparently in POSIX, some parts of which are, well, spotty compared to RFCs.