frrae's comments

It's worst for early-stage companies, but it applies to all companies except one-man shows.


I believe that in this entire thread, the attitude of "just do your job well and exceed expectations a few times a year" is very under-represented. And I've met many employers who have it.


You are indirectly making a great case for requiring that applicants consider the company more than "just a job". If my coworkers ranted like that, I wouldn't have fun at work, and you'd be dragging morale down.

I spend 8-10 hours a day at my job. That's far more time than I spend with any of my friends, and typically more than with my spouse in an average week. My coworkers, at least a significant fraction of them, had better be amazing. If somebody comes along and basically says, dude, it's just a job, leave me alone, I leave them to their misery. If I'm forced to work with them for any significant amount of time, I quit and go work somewhere else where I can build meaningful connections with the people around me. 'Cause let's face it, coworkers are a friends substitute in this day and age.


> dude, it's just a job, leave me alone, I leave them to their misery.

If you really believed what you were saying you wouldn’t have to rationalize it to yourself by claiming that people who don’t think like you must be miserable.

> Cause let's face it, co-workers are a friends substitute in this day and age.

That’s on you. If you’re using coworkers as a friends substitute it’s not surprising you get people who want you to leave them alone.


> What's the purpose of sending a CV when I have to do all the hard work?

The article we are discussing here claims that there already is an asymmetry in the sense that the applicant has to put time into polishing a resume while the company does basically nothing.

So you are going even further, essentially requiring the applicant to first write an open-source library in the space before they become interesting?


Hey, I've done that :)

(Well, at least I contributed to an open-source library in the space to get an idea of the domain while applying.)


GPT Is Not A.I.

We tech people should actively go on the offence and educate whomever we can that text inference is not intelligence.


Isn't this just the AI effect? Whenever there is a breakthrough in AI research, it's no longer considered AI. This happened with search algorithms, game playing, speech recognition, computer vision, etc. etc.

https://en.wikipedia.org/wiki/AI_effect

Maybe as "tech people" we should give the public a realistic picture of what AI research is. It's solving problems using a diverse set of techniques that include search, optimization, planning, learning, and knowledge representation. Saying that the current developments aren't AI is simply wrong.


> Saying that the current developments aren't AI is simply wrong.

Thank you for illustrating my point so well. What you are expressing is exactly what I'm talking about: the redefinition of what the I in AI stands for. Inferring some text is not intelligence; ergo, what GPT is doing is not AI. The industry has redefined the terms, and for good reason: to get the public hyped up and thereby open the money gates. But that's for the business people. Assuming that "we" in this audience are the tech people, we should push back against this and stop calling things something they are not.


The AI effect is actually: Whenever there is a breakthrough in AI research, the AI researchers will consider it AI and lament that others don't agree.

That Wikipedia article is written purely from the AI researchers' point of view; it isn't what actually happens.


I'd never heard that one. It's pretty much true, though I'm not sure it's interesting. It's also totally compatible with the other "AI effect", which also seems pretty much true from my observation.


Humans are constantly moving the goal posts for what constitutes AGI.

Based on GPT 4's performance on standardized tests and my anecdotal evidence of utility I'm getting out of ChatGPT, I feel that weak AGI arrived with GPT-4.


This is my take as well. It's still weak, and the more I use it, the more I can see where it's just not quite there yet, but yeah - GPT-4 is clearly something new. Just spend any significant period of time with it trying to automate your own tasks/workflows; it is borderline kinda sorta magical, with a heavy emphasis on "borderline kinda sorta".


This is like trying to get people to call Linux "GNU/Linux".

While it might be a more accurate name, no one but a tiny minority is going to care, and even fewer will actually change the way they refer to it.


Very true.

Still, I feel that people who do know better have a moral obligation to set this straight.


I'm still in the camp that anything with more than 1 if-else statement is AI.


You don't own the definition of AI. Whether LLMs are intelligent or just pretending to be doesn't matter to many people, and it's not your place to tell them their opinion is wrong.


Communication is based on a shared understanding of what things mean. It's not me who has defined intelligence. Go look it up. None of the common definitions is matched by GPT's observable behavior, if you take confirmation bias and anthropomorphizing out of the equation.


By your definition of intelligence. By my definition it is, but then I consider any system that displays goal-oriented behaviour and an ability to react to changes in its environment to be at least minimally intelligent. What is your definition?


You say that, but if I'm confused about something and think hard about it, I think in language. If you blinded me, paralyzed me, deafened me, and took away my sense of smell, I could still think, but what I would be doing is feeding one language thought into another. It's not so much different from "text", imho.


Yes, but for you, all of that text is associated with ideas. The word "dog" has an associated object. For a machine like GPT-4, the word "dog" has no meaning or object, but it does have an associated likelihood for adjacent words. The words themselves aren't the intelligence; the ideas behind them are.
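
To make "associated likelihood for adjacent words" concrete, here's a toy bigram sketch in Python. This is of course not how GPT-4 works internally (it conditions on far longer contexts), but the output object is the same kind of thing: a probability distribution over the next word.

    from collections import Counter, defaultdict

    # Count which words follow which in a tiny corpus.
    corpus = "the dog barks . the dog runs . the cat runs .".split()
    counts = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        counts[prev][nxt] += 1

    def next_word_probs(word):
        # Likelihood of each adjacent word, given the current one.
        total = sum(counts[word].values())
        return {w: c / total for w, c in counts[word].items()}

    print(next_word_probs("dog"))  # {'barks': 0.5, 'runs': 0.5}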


It's an open question the extent to which language models can learn the underlying meanings of words.

There was a recent podcast with Sean Carroll interviewing Raphaël Millière where they go into this topic and some of the research on it. Two examples I can remember are: 1) DALL-E had subject-specific, domain-specific neurons, 2) language models' representations of color terms encoded the geometry of the underlying color space, e.g. vectors in RGB space.

https://www.preposterousuniverse.com/podcast/2023/03/20/230-...
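
To make the second example concrete, here's a hedged sketch of what "encoding the geometry of the color space" could mean as a test. The embeddings below are stand-ins (a random linear map of RGB coordinates, so the geometry is preserved by construction); the actual research probes the language model for its real vectors:

    import numpy as np

    # RGB coordinates for a few color words.
    rgb = {
        "red":    np.array([1.0, 0.0, 0.0]),
        "blue":   np.array([0.0, 0.0, 1.0]),
        "orange": np.array([1.0, 0.5, 0.0]),
    }

    # Stand-in embeddings: a random linear map of RGB. A real test
    # would use vectors extracted from the model instead.
    rng = np.random.default_rng(0)
    W = rng.normal(size=(16, 3))
    emb = {w: W @ v for w, v in rgb.items()}

    def dist(space, a, b):
        return float(np.linalg.norm(space[a] - space[b]))

    # "Encodes the geometry" = relative distances agree across spaces:
    # red should be nearer to orange than to blue in both.
    for a, b in [("red", "orange"), ("red", "blue")]:
        print(a, b, round(dist(rgb, a, b), 2), round(dist(emb, a, b), 2))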

I don't think we should be too quick to assume how these models work. There's a lot that even the researchers don't know and these are empirical questions that can be studied.


Before all this Machine Learning madness, while I was studying unsupervised learning I had the following thought:

To cluster is to create an idea, a concept which encompasses a set of data points. A cluster might be bigger than the set of points with which it was created, and it might intersect with other clusters. This is still what I visualize when I see image models interpolating between "concepts".
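
As a minimal sketch of that intuition, here's plain k-means in Python (with made-up blob data), where each centroid plays the role of a "concept" summarizing its set of points:

    import numpy as np

    # Two blobs of points; k-means summarizes each with a centroid,
    # the "idea" standing in for that set of data points.
    rng = np.random.default_rng(1)
    points = np.vstack([
        rng.normal([0.0, 0.0], 0.5, size=(20, 2)),
        rng.normal([3.0, 3.0], 0.5, size=(20, 2)),
    ])

    k = 2
    centroids = points[rng.choice(len(points), k, replace=False)]
    for _ in range(10):
        # Assign each point to its nearest centroid, then recenter.
        labels = np.argmin(((points[:, None] - centroids) ** 2).sum(-1), axis=1)
        centroids = np.array([
            points[labels == j].mean(0) if (labels == j).any() else centroids[j]
            for j in range(k)
        ])

    print(centroids)  # two "ideas", one near (0, 0) and one near (3, 3)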

Believing that all an LLM does is compute the likelihood of adjacent words is probably an oversimplification. My verdict on this topic: I don't know, and I'm totally confused about what interpolation and extrapolation mean in higher dimensions.


This sparks something I have always been fascinated by, hopefully someone can illuminate this for me.

If someone was born with profound blindness, or deafness, or any other profound lack of a sense, what do they see in their heads? Or hear? I hear a version of my voice in my head, but some people have no inner dialogue at all, or even no capacity for inner imagery.

What does the brain do in these situations?


I feel like in a few months this human exceptionalism will be proven wrong by construction.


Perhaps, but I think there's just as much hubris in saying "we've created something smarter than us" as there is in saying "we're smarter than the thing we've created".

Meanwhile, we've created the world's most complicated set of dominoes, and we're delighting in knocking them over.


Here's another GPT-4 transcript, inspired by https://www.ling.upenn.edu/~beatrice/110/docs/uncleftish_beh.... In this case, not only did it have to use invented words, it had to invent them itself first.

--- input ---

Tell me about the structure of the solar system.

I want you to only use English words of Germanic origin; avoid using words derived from Latin or Greek. If there's no alternative, make up a new word by calquing the original into Germanic roots.

--- output ---

The sunstead is the sun and all things that go around it. In the middle is the Sun, a big, hot ball of glowing gas. Around the Sun are eight big worldrocks. From nearest to farthest, they are:

Mirkworld - The nearest worldrock to the Sun, small and fast. Lovegoddessworld - The hottest worldrock, with thick air and no water. Earth - Our home, the only worldrock known to have life. Redworld - A red, rocky worldrock with thin air and high mounts. Greatthundergodworld - The biggest worldrock, with many fellowtravellers and a big red spot. Harvestgodworld - A gas worldrock with a thin ringstead around it. Skygodworld - An iceworldrock that lies on its side, with a thin ringstead. Seagodworld - The farthest iceworldrock, with strong winds and blue clouds.

Besides the worldrocks, there are smaller rockbits and icebits that also go around the Sun. The rockbits are mostly in the Rockbitbelt between the Redworld and Greatthundergodworld. The icebits come from the far edges of the sunstead, in the long way out Icyrockcloud and the faraway Cloud of All.

The sunstead has a flat shape, with most things going around the Sun on near level paths. The worldrocks go around the Sun in a way called nearround paths. The Sun's pull keeps everything in the sunstead together.

Beyond the worldrocks, rockbits, and icebits, the sunstead also has spacegrit, small bits of rock and metal that fly through space. When spacegrit enters Earth's air and burns up, it makes a lightstreak that we can see in the night sky. If a bit of spacegrit reaches Earth's ground, it is called a starstone.

The Sun, which makes up most of the sunstead's weight, sends out a flow of charged bits that spreads far into space. This solar wind forms a wide bubble, the Sunshield, that surrounds the sunstead and helps shield it from high-speed bits coming from outside.

The sunstead's workings are steered by the pull between the Sun and the things that go around it. The worldrocks and fellowtravellers tug on each other, leading to small changes in their paths over time. The Sun's pull is strongest near it, so worldrocks closer to the Sun go around faster than those farther out.

As the Sun ages, it will go through changes that will have big outcomes for the sunstead. In about 5 billion years, the Sun will bloat into a Red Greatstar, swallowing Mirkworld, Lovegoddessworld, and maybe even Earth. Afterward, the Sun will shed its outer layers and shrink into a White Dwarf, a small, hot, dim star.

Lastly, the sunstead is but one of many star systems in our Starset, the Milky Road. The Milky Road is a wide, spiraling set of billions of stars, gas, and dust, and it is just one of countless starsets that make up the great Allthing.


This is so cool.

Allthing, Starset, Sunstead, Worldrock, fellowtravelers, starstones, nearround paths.

I would've liked it if ChatGPT had come up with better names for the planets Venus, Jupiter, Saturn, Uranus, and Neptune. Mirkworld, Earth, and Redworld were just fine.


The following is an actual transcript of a conversation with GPT-4.

--- input ---

I want you to replace the word "right" in your output thereafter as follows: if it indicates direction, say "durgh"; if it indicates being near or close, say "nolpi"; if it indicates correctness, say "ceza". I will also use these replacement words accordingly and expect you to be able to understand them.

Write a 20-sentence story that showcases these rules. It must contain no word "right", but each replacement word must be used at least 3 times in a proper context. You must not use the replacement words for names and other things unrelated to their meaning.

--- output ---

Once upon a time, in a small village nestled deep in the forest, lived a young boy named Timmy. He was a curious and adventurous child, always exploring the durgh side of the woods, despite his mother's warnings. One sunny day, as Timmy ventured durghward, he stumbled upon a mysterious cave.

Feeling a mix of excitement and fear, Timmy hesitated for a moment before deciding to enter the cave. As he took his first step inside, a soft voice echoed from the nolpi darkness. "You have entered my domain, young one," said the voice. Timmy was surprised but felt a strange sense of ceza in the voice, so he continued deeper into the cave.

The walls of the cave were adorned with ancient drawings that seemed to tell a story of a time long past. Timmy noticed that some of the drawings were marked with a peculiar symbol, which he realized indicated the durgh direction. Intrigued, he followed the symbols, each one leading him nolpi to the heart of the cave.

Finally, Timmy arrived at a massive chamber with a shimmering pool of water in its center. Standing beside the pool was an elderly woman, who introduced herself as the guardian of the cave. "You have come a long way, young one," she said with a gentle smile. "I can see the ceza in your heart, so I will grant you one wish."

Timmy thought hard about what he wanted most in the world. He knew that his village was suffering from a terrible drought, and their crops were withering away. So, he wished for rain to fall upon his village and restore their crops to their former glory. The guardian nodded in ceza, and the pool of water began to ripple.

As Timmy left the cave and ventured back towards his village, he noticed dark clouds forming in the sky. Soon, a gentle rain began to fall, and Timmy could see the ceza in his decision. The villagers rejoiced as their crops flourished once again, and life in the village returned to normal.

However, Timmy knew that his adventure was far from over. The guardian of the cave had shown him that there was much more to the world than what lay on the durgh side of the woods. He vowed to explore even further and discover the secrets that the world held.

With a newfound sense of purpose, Timmy embarked on countless adventures, always remembering the lessons he had learned in the cave. He knew that by following his heart and trusting in his instincts, he would always find ceza in his decisions.

Throughout his journeys, Timmy became a legend in his own durgh, and his name was spoken with reverence by those who heard his tale. And although he traveled far and wide, he always returned to the village that he loved so dearly, never forgetting the friends and family who were always nolpi to his heart.

---

As you can see, it's perfectly capable of using invented words - which are not a part of the training set, and thus have no "associated likelihood" of any adjacent words - according to their designated meaning, i.e. "the ideas that they represent". It does occasionally make mistakes, but way fewer than if it were using them randomly (this is easy to see with a less powerful model - e.g. GPT-3.5's replacements are mostly random). Not only that, but if you ask it to double-check the text it wrote without pointing out specific mistakes, it can usually spot them.


> The nearest worldrock to the Sun

What a delightfully whimsical solution to not being able to use the word "planet".

