It seems the job selects for those types. I suppose people interested in law enforcement / justice that aren't that way either end up as lawyers or working for the FBI or something.
If you don't have any kind of marketable skills yet want to make a decent living with plenty of benefits, becoming a LEO is the easiest choice for most people.
Or if you don't have any marketable skills yet have a spouse that has a job with health benefits, you can become a real estate agent.
Those two career paths seem to be the most chosen for almost all of the 'not so bright' folks I grew up with.
It's a use-it-or-lose-it skill. When you carry a badge and gun around and can bark orders at people all day, and they have to comply or face the infinite violence you can summon with your radio, your skin will grow thin over time.
Power corrupts, or some half baked version of that.
It seems to me that the market will select for urban sprawl, though, which is a negative for society but has the highest margin. E.g. the Houston suburbs: miles and miles of cheap-to-build single-family homes that turn the place into a suburban hellscape where you have to drive everywhere.
I don't think the free market is delivering the promises you say it is - supply isn't elastic for real estate if nobody's building because there are no margins. Demand can be anywhere, really.
I like to look to Tokyo for an example. Small lots, extremely predictable regulations (that are still strict enough to ensure a safe living situation), fast approvals, mean it's much faster and easier to throw up an 8-10 story apartment than say downtown Austin, and so even today they keep doing it despite land in Tokyo being very expensive. And, no sprawl.
I believe the market in the US selects for urban sprawl because it's usually subsidized by the dense urban core. Suburban areas often don't generate enough tax revenue to support their own infrastructure and services.
They can throw up 8-10 story apartments in Tokyo despite land being very expensive because they tear them down and rebuild them after 20-30 years. Also, Tokyo outside of a few areas isn't that tall; it is definitely dense, but 3-4-story-building dense (those homes are also torn down and built anew every 20-30 years, so construction buzzes in Tokyo).
It would be better if you considered new actual living capacity in Tokyo rather than just new constructions.
I've lived in Tokyo and Houston. Tokyo is infinitely more livable and the housing is still relatively affordable. In Houston even if you can get a new place it'll be clapboard garbage that's a one and a half hour drive from work.
Average home price in Tokyo is 1.5x that of Houston. Average wage in Houston is 1.5x that of Tokyo. So the data doesn't support your post unless you really really love sushi and arcades.
Maybe other real estate savvy people can help me understand this plus two other things I'm confused about in the housing crisis:
1. Houses are unaffordable for many Americans. To get houses back to prices where they'd be affordable again would require a housing-price drop that would likely be, market-wide, deep enough to put a ton of people underwater on their mortgages. What is society/the government meant to do about that? Is it an insurmountable floor on how low we can get housing prices? That floor feels very close if so.
2. We've been promising the last five generations (or more) of Americans that a house is an Investment, capital I, an excellent place to keep your money. How do we overcome the political pressure to turn a house into a depreciating investment for the length of time required to get housing to be affordable again? What kind of politician would put their neck on the line to piss off every boomer and 75% of gen X and 30% of millennials, or whatever the house ownership distribution is?
There's a big difference between land prices and the building prices. When costs rise 5% per year for a house that's untouched, that's almost entirely the land price going up.
You can make housing cheaper by putting more houses on the same amount of land. In high cost areas, the price of land dominates the cost of housing.
Political pressure to change the investment nature of housing can come from various directions - for example, establishing a land value tax. That eliminates the financial incentive to speculate on rising land prices by keeping people out of your area, redistributes all those unearned land rents to the population equally, as is only fair, and also prompts a lot of people who are otherwise hoarding land to sell it for redevelopment when the rest of society would be using it a lot better. Of course, in societies with high levels of land ownership, the voting public usually tries to vote away such extremely fair taxes.
Politically, we must stop prioritizing the views of homeowners at the local level. They already got their reward, massive unearned capital gains on their residence, there's no need to give them priority on land use over the general needs of society.
> Politically, we must stop prioritizing the views of homeowners at the local level. They already got their reward, massive unearned capital gains on their residence, there's no need to give them priority on land use over the general needs of society.
They are the majority of people in most areas, so it does make sense that they would be given priority in some ways.
The rest of your post is unsubstantiated vitriol, which isn't exactly convincing.
You quoted my vitriol to the homeowners, the rest is not vitriol, it's basic land economics.
> They are the majority of people in most areas, so it does make sense that they would be given priority in some ways.
In some ways sure. But in the ways that they are? Absolutely not, it's basic unfairness. The entire tax system is tilted in favor of home owners. We don't need to do that, we could make it more equal so that people with less wealth are not penalized.
It's not "basic land economics", it's your personal opinion about how things should be and whether you think current policies are fair.
The tax code does favor home ownership, because people want to support it. Fewer people would be able to afford their own home without that support, which seems to be the opposite of what you want.
High levels of home ownership combined with "local control" and "democracy" enables the "haves" who already own homes to weaponize government to keep supply low and home values high. Zoning restrictions, building codes, taxes, and other government tools are brought to bear to support this. The "have nots" don't have a chance.
Austin seems to be a counter-example when they "instituted an array of policy reforms" in 2015 that showed great results. Sadly the key may be appealing to the greed of existing homeowners. Changing zoning to allow tall apartment buildings where single family dwellings once stood lets existing home owners make even more money by selling than they'd make by continuing to restrict supply. While it's sad if that's the only path to success, we'll have to take small successes where we can find them.
I'd also rethink these questions under the assumption that incomes rise over time as the dollar reduces in purchasing power. The original premise was that due to inflation the cost you paid for a home would reduce your economic burden for housing. The slow and steady rise of inflation along with income would guarantee your loan to income ratio would improve.
The last few years have distorted this promise and I think some people have taken a more extreme view of the time window in the name of increased short-term profits.
All said, the price you pay today becoming less of a burden over time was never meant to be a short-term profit motive in the discussion of homes as an economic safe haven.
From what I understand, in terms of genetic changes to intellectual abilities, there's not much evidence to suggest we're so much smarter that your proposed teleported baby would be noticeably stupider - at best they'd be on the tail of the bell curve, well within a normal distribution. Maybe if we teleported ten thousand babies, their bell curve would be slightly behind ours. Take a look at "wild children" for the very few examples we can find of modern humans developed without culture. Seems like above everything, our culture, society, and thus education is what makes us smart. And our incredibly high calorie food, of course.
That is exactly what civilization is about - for new generations to start not from scratch, but from some baseline their parents achieved (accumulated knowledge and culture). This allows new generations to push forward instead of retreading the same path.
It's impossible to prove the counterfactual (I guess, as I imagine we don't have enough gene information that far back). But I'd imagine that the high-calorie food you can get starting with the advent of agriculture is exactly what could drive evolution in a direction that helps brains grow. We've had roughly 400-500 generations since then; that should be enough for some change to happen. Our brains use up 20% of the body's energy. Do we know that this was already the case during the stone age?
The advent of agriculture did not provide better food, it was just the only solution to avoid extinction due to the lack of food.
The archaeological evidence shows that for many generations the first neolithic farmers had serious health problems in comparison with their ancestors. Therefore it is quite certain that they did not transition to agriculture willingly, but to avoid starvation.
Later, when the agriculturalists displaced the hunter-gatherers everywhere, they succeeded not because they were individually better fed or stronger or smarter, but only because there were many more of them.
The hunter-gatherers required very large territories from which to obtain enough food. For a given territory size, practicing agriculture could sustain a many times greater population, and this was its advantage.
The maximum human brain size had been reached hundreds of thousands of years before the development of agriculture, and it has regressed a little since then.
There is a theory, which I consider plausible, that the great increase in size of the human brain has been enabled by the fact that humans were able to extract bone marrow from bones, which provided both the high amount of calories and the long-chain fatty acids that are required for a big brain.
I've seen the bone marrow hypothesis too, which is very interesting. Afaik evidence shows at least that there was enough specialization during the neolithic era to have bone-marrow cooks to whom the hunters delivered their bones. Something you wouldn't expect based on just school knowledge (at least back in the 90s/2000s).
I see your point about agriculture at first degrading the quality of food. Are you aware of evidence of brain size actually shrinking? Is it visible in skull remains?
200k years just isn't much time for significant evolutionary changes considering the human population "reset" a couple times to very very small numbers.
Reich's lab actually found evidence of meaningful genetic changes that improved intelligence over the past 10,000 years, but not so much prior to that:
Converted to dollars, the value is far greater than the cost of a single bomb dropped on strangers that aren't a threat to me, so I don't need to justify it until someone can justify to me the bombs, the oil and gas subsidies, the bailouts, the...
My point is I don't want bombs dropped on strangers, so, in terms of things the government spends money on, there's nothing of less value to me than a single bomb dropped on a stranger. Of all the things the government spends its money on, I'd rather any one of them take 100% of the budget than even a penny go to dropping a bomb on a stranger, even if that significantly decreased my quality of life.
I just really don't like my government killing people far away that pose no threat to me.
To be fair the Google maps restaurant side of the operation is quite possibly the largest ratio I've ever seen between "amount of capital and engineering skill available" and "quality (lack thereof) of UX." You have to access your restaurant profile through the Google search portal. It's a nightmare.
> The richest person I know talks to robots all the time.
I've noticed this too, but I always thought of it as mostly people fooling themselves.
If you're rich (let's say anywhere above $10 million), it's practically guaranteed that you can allocate resources such that more effective engineering, or science, or whatever, gets done in less time than if you tried to do it yourself (rather than spending your time allocating capital). I've actually thought of this as a bit of a curse: the value of a rich person's labor output is inversely related to their net worth. No matter how smart you are, you're not smarter than a crack team of Ukrainian/Vietnamese/Taiwanese/Indian scientists/engineers/whatever, and the richer you get, the more you can stack your crack teams, either paying higher salaries for higher-skilled people or building bigger teams.
I think there's maybe 100 outliers to this rule in the world, people like John Carmack. I mean I assume he's rich.
I'm not sure that he doesn't like to, so much as that the position he ended up in as a result of the Oculus acquisition had no actual authority attached to it. He was functionally a glorified adviser, to trot out at trade shows (and reading between the lines, this was a pretty frustrating position to end up in - he'd rather have had a real job, even if it was to build something he didn't fully agree with).
> It’s a standalone tool that lives outside the computer. I put the EEPROM into the socket, and connect via serial to my laptop to upload the binary files.
Huh, I guess I never really thought about it, but how did they program the first CPUs? Like how did they overcome the chicken/egg situation?
IIUC, that's what the sci-fi LED panels of really old computers were. They showed all the internal statuses of the CPU as well as the CPU-RAM bus, and the toggle switches allowed individual bit overrides.
The operator sets the CPU RESET switch to RESET, powers on the machine, and starts toggling the RAM address and data switches, like HHLL HLLH HHHL LLLL. The operator then presses and releases the STEP push switch. The address 0b 1100 1001 is now set to 0b 1110 0000. This is repeated until the desired program or a bootloader is complete. The operator finally sets CPU RESET to Normal and the CLOCK dial to RUN.
The CPU exits the reset state, initializes the program counter with the reset vector, e.g. 0b1000, and starts executing instructions at PC++: 1000, 1001, 1010, and so on. Then, oh no, the EXCEPTION indicator comes on and the LED shows 0b 1110 0000. That's "divide r0 by 0", etc.
They didn't actually spend half a day toggling those switches every time. They loaded their equivalent of bare-minimum BIOS recovery code, then the rest was loaded from magnetic or mechanical tapes. Only when computers were booted up from a blank slate, or crashed and in need of debugging, did users resort to that interface.
If the CPU-RAM main bus was split into ROM and RAM address ranges in such a way that setting the address to the reset vector yields the first byte of a BIOS program lithographically etched into the ROM chip, then simply powering on the machine does the same thing as loading the BIOS manually.
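The deposit-and-step front-panel procedure described above can be modeled with a tiny Python sketch. The switch widths, RAM model, and example values here are illustrative (taken from the toggling example in this thread), not any specific machine:

```python
# Toy model of front-panel programming: the operator toggles an address
# and a data byte on switches, then presses STEP to latch the byte into RAM.
RAM = {}

def deposit(address_switches, data_switches):
    """Simulate pressing STEP: latch the data toggles into RAM
    at the address the address toggles currently select."""
    RAM[address_switches] = data_switches

# Toggle in one byte, HHLL HLLH (address) / HHHL LLLL (data):
deposit(0b1100_1001, 0b1110_0000)
```

Repeating `deposit` for each byte of a bootloader, then releasing RESET, is the whole "cold start" procedure.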
There were also things like magnetic core memories. They didn't require lithography to fabricate, and there were both ROM and RAM kinds of those.
Of course, if you have sufficiently simple input devices that could do DMA, then you can do something e.g. IBM 1401 did:
When the LOAD button on the 1402 Card Read-Punch is pressed, a
card is read into memory locations 001–080, a word mark is set
in location 001 to indicate that it is an executable instruction,
the word marks in locations 002-080 (if any) are cleared, and
execution starts with the instruction at location 001. [...] To
read subsequent cards, an explicit Read command (opcode 1) must
be executed as the last instruction on every card to get the new
card's contents into locations 001–080.
I imagine the additional wiring on that LOAD button must have been pretty small: the READ functionality already exists in the 1402 device, it also has an output signal that tells when the read is finished (so the 1401 Processing Unit knows when the Read command is done), so you just need to tie that signal into resetting the PC to 1 and then starting the clock.
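The quoted LOAD-button behavior can be sketched in Python. The memory size is illustrative, and locations are 1-indexed as in the manual excerpt:

```python
# Sketch of the 1402 LOAD sequence described above: read one 80-column
# card into locations 001-080, set a word mark at 001, clear word marks
# in 002-080, and begin execution at location 001.
MEM_SIZE = 200  # illustrative; not the real machine's capacity
memory = [' '] * (MEM_SIZE + 1)      # index 0 unused, 1-indexed
word_marks = [False] * (MEM_SIZE + 1)

def press_load(card):
    """Simulate pressing LOAD with one card in the reader.
    Returns the starting program-counter value (location 001)."""
    for i, ch in enumerate(card.ljust(80)[:80]):
        memory[1 + i] = ch
    word_marks[1] = True
    for loc in range(2, 81):
        word_marks[loc] = False
    return 1
```

Each subsequent card would end in an explicit Read instruction, which re-fills 001-080 the same way, so a multi-card loader chains itself.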
Actual application code was hardwired, entered manually with switches and lights, or with punch cards. Later, when ICs were sufficiently advanced, mask-programmed ROMs/PLAs.
Electrically it's essentially what happens in most mask ROMs, but as a circuit board that let you solder in a diode (or not) at each bit location in order to specify a 1 or a 0.
When I was building embedded controllers with 6502 processors in the 1970s and 1980s we used UV erasable EPROMS and a programmer (my own design) with a ZIF socket built on an expansion board in an Apple ][. The prototype board also had a ZIF socket but the production boards would have ordinary DIL sockets, our production volume was too low to warrant ordering actual ROMS so we used the UV erasable ones and put a metallised sticker over the window.
All programming was done on the Apple in 6502 assembler. It took 45 minutes to assemble an 8 kB ROM image. This meant that you took extreme care to think about what the code was doing, as assembling a new image was often the most time-consuming part of the Edit-Assemble-Test loop.
For the microcode ROMs, they can just be "hardwired" with a zillion simpler gates. This has the added benefit of supporting way higher clock speeds. For my planned program ROM, you would either have to input it manually like on the first computers, or use other things like punch cards - or your computer would again be "hardwired" to load programs from some other medium.
He says that's for microcode ROMs though? As opposed to a user program written in machine code that you would use the CPU to execute. I don't believe ancient CPUs had microcode. Everything was implemented in hardware.
68K, System/360, Sperry 1100, and even the 'ACE', to name the great-granddaddy of them all, had microcode.
Technically the 6502 and the 6800/09 did not; they used a dedicated decoder that was closer to a state machine than microcode, even though both were implemented in hardware.
None of the smaller CPUs had 'loadable' microcode, but plenty of the larger ones did.
A CPU's microcode can be surprisingly simple: the CPU has a bunch of internal signals that activate certain parts of the CPU, and the logic for when to turn each signal on comes from reading a bunch of input signals. The microcode can be just a memory where the input signals form the memory address and the output is the control signals.
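That memory-as-lookup idea can be shown with a toy Python table. The address encoding ((opcode << 2) | step) and every signal name here are made up for illustration, not from any real CPU:

```python
# Toy microcode store: the input state (opcode + sequencer step) forms
# an address, and the stored word is the set of control lines to assert.
MICROCODE = {
    (0b01 << 2) | 0: {"PC_TO_BUS", "MEM_READ"},  # fetch: drive PC onto bus
    (0b01 << 2) | 1: {"BUS_TO_IR", "PC_INC"},    # latch instruction, bump PC
    (0b01 << 2) | 2: {"A_TO_ALU", "ALU_ADD"},    # execute an add
}

def control_signals(opcode, step):
    """Look up which control lines to assert for this input state."""
    return MICROCODE.get((opcode << 2) | step, set())
```

Burn the same table into a ROM instead of a dict and you have a microcode store; replace the ROM with combinational gates and you have the "hardwired" decoder discussed above.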
It's just that at some point when it's all physically wired up in hardware as opposed to being stored in some form of memory I have difficulty thinking of it as code or a program. By the time you're rearranging wires to enter a "program" aren't you actually refactoring the CPU itself?
Anyway I feel like the answer to the chicken and egg problem originally posed is to point out that things used to be different. Tools such as text editors and compilers are merely modern syntactic sugar.
Part of the machinery is a cylinder that orchestrates various very low-level operations; this means that the Jacquard cards can specify a higher-level operation. Exactly how sophisticated it is, or is supposed to be, I'm not sure.
And now that you've challenged me I can't remember where I saw this piece of information. Time for a quick web search.
Found something - I don't think this is where I saw it first, but it will do:
"Later drawings (1858) depict a regularised grid layout.[18][19] Like the central processing unit (CPU) in a modern computer, the mill would rely upon its own internal procedures, roughly equivalent to microcode in modern CPUs, to be stored in the form of pegs inserted into rotating drums called "barrels", to carry out some of the more complex instructions the user's program might specify.[7]"
I'm going off memory (of a book, not that I was alive in the 40s, ha) so grain of salt etc but I believe the very earliest (edit: electronic, digital) computers were literally rewired every time they need to be re-programmed.