Management greed, stupidity, and self-serving behavior are perennial. Nothing new there.
Nassim Nicholas Taleb said in “The Black Swan” that he thought one of the unrecognized strengths of stock-market-based economies was that as publicly traded companies grow and get older, they tend to become bloated and incapable, and lose money and eventually die; and this represents a mechanism for redistributing wealth away from the investing classes (“the rich”) with some of the money making its way back into society as a whole.
IDK if that’s still true or ever was, but he was extremely successful working in finance; he wasn’t just some idiot saying his opinions.
Doesn’t work as well these days when everything is too big to fail and gets bailed out, instead of letting the economy endure the destruction part of creative destruction.
So glad we voted that one in! 😍 /S
This is kind of like saying in war, old weapons getting phased out reduces violence. While I am relieved I don’t have to worry about musket ball injuries, the new weapons are more effective at what they are designed to do and the people driving the need for newer and more effective weapons have not fundamentally changed in their motivations.
The businesses themselves aren’t driving the economy or how the megarich really make their money. Businesses are only the tools used by what’s actually driving the economy which is Capital. The same Capitalists which drive businesses to behave ruthlessly in a marketplace, grow rapidly, and ultimately collapse under their own weight will simply reinvest in an entity which will competently bring in a return on investment. The only redistribution of wealth happening is Capital investment being diverted to other tools of Capitalism. Capitalists don’t care which businesses or industry they’re investing in, they only care about maximising the return on their investment and using their influence to ensure that happens as much as possible.
When I think of historical wealth distribution which has had major impact on the lives of regular people, I can’t think of any which were caused by an outdated business clearing up some room in the market for newer and more lucrative capital investments to take its place. I have seen it through government action though.
The investment class realized it’s way more profitable to cellar box a struggling company and that you can short sell the stock and never have to pay up when the company goes bankrupt. Free money!
Yeah, I think they’re getting better at recapturing all the useful flesh any time that happens, now.
with some of the money making its way back into society as a whole.
I’d laugh, but it hurts too much.
The article even reinforces this with a story about a fake livestream from the ’90s or early 2000s.
Can’t wait for a story from a developer or sysadmin that knows how all the duct tape is held together, gets laid off and refuses to come back to fix everything. Then the former employer doubles down and threatens to sue them for loss of revenue. It would be absurd but I expect the absurd now.
Pah! I can fail at my job much faster than any goshdarn AI!
I can improvise new ways to fail at my job, and do so without prompting! I am truly Generally Unintelligent.
Haha I posted the same but you beat me to it, I removed mine <3
Another idiot writer missing how AI works… along with every other automation and productivity increase.
I literally automate jobs for a living.
My job isn’t to eliminate the role of every staff member in a department, it’s to take the headcount from 40 to 20 while having the remaining 20 be able to produce the same results. I’ve successfully done this dozens of times in my career, and generative AI is now just another tool we can use to get that number down a little bit lower or more easily than we could before.
Will I be able to take a unit of 2 people down to 0 people? No, I’ve never seen a process where I could eliminate every human.
Cory Doctorow is an idiot writer? Do you know of him and you’ve reached this conclusion, or you don’t know who he is and just throwing shade?
I am curious. How much follow-up do you do after your automations 1 year later to see how the profit and loss picture of the department has worked out after your work is done?
(Not that that’s the point; I think you’ll get very little sympathy here for “I help the already-rich to keep more of the productive output of the world and make sure workers keep less” even if you can make an argument that you can do it effectively.)
I’ve been following Doctorow for decades now (BoingBoing) and yes, he’s an idiot in this situation.
I’m still working with the organizations I started automating for more than a decade ago. I’m sitting in the office of one of them right now. It’s worked out great, nobody is complaining about the fact that this office space now has people at separated desks instead of crunched together like they were when I started. If it makes you feel any better, I almost exclusively do this for government and public organizations (I’m at a post-secondary education institution right now) though I really don’t care.
Stopping or stalling productivity improvements is stupid, that job is effectively useless if it can be automated, it’s nothing more than make-work to keep it. We should pass laws to redistribute wealth to solve that problem, not keep them in useless jobs by preventing automation.
You’re still working simultaneously with dozens of different organizations? Maybe I’m misunderstanding something.
Stopping or stalling productivity improvements is stupid, that job is effectively useless if it can be automated, it’s nothing more than make-work to keep it. We should pass laws to redistribute wealth to solve that problem, not keep them in useless jobs by preventing automation.
Like a lot of things, the devil is in the details. Almost everyone’s firsthand experience with consultants coming in and enacting “efficiency” is that it’s bad not only for the employees, obviously, but also for the business. I’m not saying that’s the impact of what you’re doing, just what most people’s experience is going to be.
So there’s a central question in AI: Once the machines can do everything for us, does that mean everyone eats for free? Or no one eats? What would your answer to that question be?
As AI gets better we will need UBI.
That’s what I’m saying and people call me radicalised for that 😅
No, I have worked with a dozen or so organizations, but I’ve done multiple jobs for each. I’m a freelancer.
As for your second question, I’d like to see a basic income implemented for all citizens in my country. I’ve talked to my local politicians about it multiple times. It’s something that people now know about, which is good progress in my opinion. I don’t expect it to happen soon, but hopefully we’ll get there before we start to have too many social problems.
does that mean everyone eats for free? Or no one eats?
Yes.
Everyone eats for free… but the machines don’t need to eat, so why produce any food at all? That soy and corn getting used to feed caged animals for human consumption? Turn it into biofuel to power the machines. The hordes of hungry protestors?.. more biofuel.
I sat in a room of probably 400 engineers last spring and they all laughed and jeered when the presenter asked if AI could replace them. With the right framework and dataset, ML almost certainly could replace about 2/3 of the people there; I know the work they do (I’m one of them) and the bulk of my time is spent recreating documentation using 2-3 computer programs to facilitate calculations and looking up and applying manufacturer’s data to the situation. Mine is an industry of high repeatability and the human judgement part is, at most, 10% of the job.
Here’s the real problem. The people who will be fully automatable are those with less than 10 years’ experience. They’re the ones doing the day-to-day layout and design, and their work is monitored, guided, and checked by an experienced senior engineer to catch their mistakes. Replacing all of those people with AI will save a ton of money, right up until all of the senior engineers retire. In a system which maximizes corporate/partner profit, that will come at the expense of training the future senior engineers until, at some point, there won’t be any (/enough), and yet there will still be a substantial fraction of oversight that will be needed. Unfortunately, ML is based on human learning, and replacing the “learning” stage of human practitioners with machines is going to eventually create a gap in qualified human oversight. That may not matter too much for marketing art departments, but for structural engineers it’s going to result in a safety or reliability issue for society as a whole. And since failures in my profession only occur in marginal situations (high loads - wind, snow, rain, mass gatherings) my suspicion is that it will be decades before we really find out that we’ve been whistling past the graveyard.
Yeah. This is something that to me isn’t getting enough attention in the whole conversation. I’m trying to get myself up to speed on how to code effectively with AI tools, but I feel like understanding the code at a deep level is required in order to be able to do that effectively.
In the future, I think the “earning” that gives you that type of knowledge won’t be something that people are forced to go through anymore, because AI can do the simple stuff for them, and so the inevitable result is that very few people will be able to do more than rely on the AI tools to either get it right or not, because they don’t understand the underlying systems. I’m honestly not sure what future is in store a couple generations from now other than most people being forced to trust the AI (whatever its capabilities or incapabilities are at that point). That doesn’t sound like a good scenario.
The future is already here. This will sound like some old man yelling at clouds, but the tools available for advanced structural design (automatic environmental loading, finite element modeling) are used by young engineers as magical black boxes which spit out answers. That’s little different than 30 years ago when the generation before me would complain that calculators, unlike slide rules, were so disconnected from the problem that you could put in two numbers, hit the wrong operation, and get a nonsensical answer but believe it to be correct because the calculator told you so.
This evolution is no different, it’s just that the process of design (whether programming or structures or medical evaluation) will be further along before someone realizes that everything that’s being offered is utter shit. I’m actually excited about the prospect of AI/ML, but it still needs to be handled like a tool. Modern machinery can do amazing things faster, and with higher precision, than hand tools - but when things go sideways they can also destroy things much quicker and with far greater damage.
old man yelling at clouds
My turn.
Almost 30 years ago, in sunny Spain, a friend of mine was studying to become an Electrical Engineer. Among the things he told me would be under his responsibility, would be approving the plans for industrial buildings. “So your curriculum includes some architecture?”, I asked. “No need”, he responded, “you just put the numbers into a program and it spits out all that’s needed”.
Fast forward to 2006, when an industrial hall in Poland, built by a Spanish company, and turned into a disco, succumbed under the weight of snow on its roof, killing 65 people.
Wonder if someone forgot to check the “it snows in winter” option… 🙄
The difference is that calculators are deterministic and correct. If you get a wrong answer, it is you that made the mistake.
LLMs will frequently output nonsense answers. If you get a wrong answer, it is probably the machine that made the mistake.
that will come at the expense of training the future senior engineers until, at some point, there won’t be any (/enough)
Anything a human can be trained to do, a neural network can be trained to do.
Yes, there will be a lack of trained humans for those positions… but spinning up enough “senior engineers” will be as easy as moving a slider on a cloud computing interface… or remote API… done by whichever NN comes to replace the people from HR.
ML is based on human learning and replacing the “learning” stage of human practitioner with machines is going to eventually create a gap in qualified human oversight
Cue in the humanoid robots.
Better yet: outsource the creation of “qualified oversight”, and just download/subscribe to some when needed.
Anything a human can be trained to do, a neural network can be trained to do.
Citation needed
Humans are neural networks… you can cite me on that.
(Notice I didn’t say anything about the complexity, structure, or fundamental functioning of a human neural network. All points to modern artificial NNs being somewhat on a tangent to humans… but also that there is some overlap already, and that it can be increased)
Humans are a lot more than the mathematical abstraction that is a neural network.
You could say that you believe that any computational task that a human brain can accomplish, a neural network can also accomplish (simply assuming that all of the higher-level structures, different parts of the brain allocated to particular tasks, the way it encodes and interacts with memories and absorbs new skills, and the variety of chemical signals which communicate more than a simple number 0 through 1 being sent through each neuron-to-neuron connection, are abstractable within the mathematical construct of a neural network in some doable way). But that’s (a) not at all obvious to me, (b) not at all the same as simply asserting that we’ve got it all tackled now that we can do some great stuff with neural networks, and (c) not implying anything at all about how soon it’ll happen (i.e. could take 5 years, or 500, although my feeling is probably on the shorter side as well).
Artificial NNs are simulations (not “abstractions”) of animal, and human, neural networks… so, by definition, humans are not more than a neural network.
simple number 0 through 1
Not how it works.
Animal neurons respond as a clamping function, with a constant 0 output up to some threshold, where they start outputting neurotransmitters as a function of the input values. Artificial NNs have been able to simulate that for a while.
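Just to make the analogy concrete (this is a hedged toy sketch, not a biological simulation): the threshold-then-ramp response described above is essentially what ML folks call a shifted ReLU.

```python
def neuron_output(inputs, weights, threshold):
    """Toy model of the clamping behavior: constant zero output
    below a firing threshold, then output rising as a function of
    the weighted input above it (a shifted ReLU)."""
    activation = sum(i * w for i, w in zip(inputs, weights))
    return max(0.0, activation - threshold)

# Below threshold: silent. Above threshold: output grows with input.
print(neuron_output([0.2, 0.1], [1.0, 1.0], threshold=0.5))  # 0.0
print(neuron_output([0.6, 0.4], [1.0, 1.0], threshold=0.5))  # 0.5
```

Artificial NNs swap in various such activation functions (ReLU, sigmoid, etc.), but the basic “nothing until a threshold, then a graded response” shape is the part being imitated.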
Still, for a long time it used to be thought that copying the human connectome and simulating it, would be required to start showing human-like behaviors.
Then, some big surprises came from a few realizations:
- You don’t need to simulate the neurons, just the relationship between inputs and outputs (each one can be seen as the level of some neurotransmitter in some synapse).
- A grid of values can represent the connections of more neurons than you might think (most neurons are not connected to most others, the neurotransmitters don’t travel too far, they get reabsorbed, and so on).
- You don’t need to think “too much” about the structure of the network; add a few extra trillion connections to a relatively simple stack, and the network can start passing the Turing test.
- The values don’t need to be 16bit floats; NNs quantized to as little as 4bit (16 levels, 0 through 15) can still show pretty much the same behavior.
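To illustrate the 4-bit point, here’s a naive sketch (purely illustrative; production quantizers are much more sophisticated than this) of mapping float weights onto 16 integer levels and approximately reconstructing them:

```python
def quantize_4bit(weights):
    """Naive affine 4-bit quantization: map float weights onto 16
    integer levels (0..15), keeping scale and offset so the values
    can be approximately reconstructed."""
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / 15 if hi > lo else 1.0
    q = [round((w - lo) / scale) for w in weights]  # ints in 0..15
    return q, scale, lo

def dequantize_4bit(q, scale, lo):
    """Map the integer levels back to approximate float weights."""
    return [v * scale + lo for v in q]

weights = [-0.8, -0.1, 0.0, 0.3, 0.7]
q, scale, lo = quantize_4bit(weights)
restored = dequantize_4bit(q, scale, lo)
# Each restored weight is within half a quantization step of the original.
err = max(abs(a - b) for a, b in zip(weights, restored))
print(err <= scale / 2 + 1e-9)  # True
```

The surprise referenced above is that, for large networks, this bounded per-weight error barely changes the model’s overall behavior.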
There are still a couple things to tackle:
- The lifetime of a neurotransmitter in a synapse.
- Neuroplasticity.
The first one is kind of getting solved by attention heads and self-reflection, but I’d imagine adding extra layers that “surface” deeper states into shallower ones, might be a closer approach.
The second one… right now we have LoRAs, which are more like psychedelics or psychoactive drugs, working in a “bulk” kind of way… with surprisingly good results, but still.
Where it really will start getting solved, is with massive scale neuromorphic hardware accelerators the size of a 1TB microSD card (proof of concept is already here: https://www.science.org/doi/10.1126/science.ade3483 ), which could cut down training times by 10 orders of magnitude. Shoving those into a billion smartphones, then into some humanoid robots, is when the NN age will really get started.
Whether that’s going to take more or less than 5 years, it’s hard to say, but surely everyone is trying as hard as possible to make it less.
Then, imagine a “trainee” humanoid robot, with maybe 1000 accelerators of those, that once it trains a NN for whatever task, can be copied over to as many simple “worker” robots as needed. Imagine a company spending a few billion USD on training a wide range of those NNs, then offering a per-core subscription to other companies… at a fraction of the cost of similarly trained humans.
TL;DR: we haven’t seen nothing yet.
by definition, humans are not more than a neural network.
Imma stop you right there
What’s the neural net that implements storing and retrieving a specific memory within the neural net after being exposed to it once?
Remember, you said not more than a neural net – anything you add to the neural net to make that happen shouldn’t be needed, because humans can do it, and they’re not more than a neural net.
We don’t even know what consciousness or sentience is, or how the brain really works. Our hundreds of millions spent on trying to accurately simulate a rat’s brain have not brought us much closer (Blue Brain), and there may yet be quantum effects in the brain that we are barely even beginning to recognise (https://phys.org/news/2022-10-brains-quantum.html).
I get that you are excited but it really does not help anyone to exaggerate the efficacy of the AI field today. You should read some of Brooks’ enlightening writing like Elephants Don’t Play Chess, or the aeroplane analogy (https://rodneybrooks.com/an-analogy-for-the-state-of-ai/).
I’m assuming you’re being facetious. If not…well, you’re on the cutting edge of MBA learning.
There are still some things that just don’t get into books, or drawings, or written content. It’s one of the drawbacks humans have - we keep some things in our brains that just never make it to paper. I say this as someone who has encountered conditions in the field that have no literature on the effect. In the niches and corners of any practical field there are just a few people who do certain types of work, and some of them never write down their experiences. It’s frustrating as a human doing the work, but it would not necessarily be so to a ML assistant unless there is a new ability to understand and identify where solutions don’t exist and go perform expansive research to extend the knowledge. More importantly, it needs the operators holding the purse to approve that expenditure, trusting that the ML output is correct and not asking it to extrapolate in lieu of testing. Will AI/ML be there in 20 years to pick up the slack and put its digital foot down stubbornly and point out that lives are at risk? Even as a proponent of ML/AI, I’m not convinced that kind of output is likely - or even desired by the owners and users of the technology.
I think AI/ML can reduce errors and save lives. I also think it is limited in the scope of risk assessment where there are no documented conditions on which to extrapolate failure mechanisms. Heck, humans are bad at that, too - but maybe more cautious/less confident and aware of such caution/confidence. At least for the foreseeable future.
we keep some things out our brains that just never make it to paper
ISO 9001 would like to talk to all those people and have them either document, or see the door. Not really cutting edge, more of a basic business certification to even dream about bidding for any government related project (then, people still lie and don’t keep everything documented… and shit happens, but such are people).
some of them never write down their experiences
Get a humanoid learning robot, you’ll have a log of everything it experienced at the end of the day, with exact timestamps, photos, and annotations.
understand and identify where solutions don’t exist and go perform expansive research to extend the knowledge
Auto-GPT does it. The operator’s purse is why it doesn’t get used much more 😉
Anything a human can be trained to do, a neural network can be trained to do.
Come on. This is a gross exaggeration. Neural nets are incredibly limited. Try getting them to even open a door. If we someday come up with a true general AI that really can do what you say, it will be as similar to today’s neural nets as a space shuttle is to a paper aeroplane.
https://www.youtube.com/watch?v=wXxrmussq4E
Have you not been paying attention to robotics recently? Opening doors is a solved problem with consumer grade hardware and software at this point.
I wouldn’t say $74k is consumer grade, but Spot is very cool. I doubt that it is purely a neural net though; there is probably a fair bit of actionism at work.
Try getting them to even open a door
For now there is: AI vs. Stairs, you may need to wait for a future video for “AI vs. Doors” 🤷
BTW, that is a rudimentary neural network.
I’ve seen a million of such demos but simulations like these are nothing like the real world. Moravec’s paradox will make neural nets look like toddlers for a long time to come yet.
Well, that particular demo is more of a cockroach than a toddler, the neural network used seems to not have even a million weights.
Moravec’s paradox holds true because of two fronts:
- Computing resources required
- Lack of formal description of a behavior
But keep in mind that was in 1988, about 20 years before the first 1024-core multi-TFLOP GPU was designed, and that by training a NN, we’re brute-forcing away the lack of a formal description of the algorithm.
We’re now looking towards neuromorphic hardware on the trillion-“core” scale, computing resources will soon become a non-issue, and the lack of formal description will only be as much of a problem as it is to a toddler… before you copy the first trained NN to an identical body and re-training costs drop to O(0)… which is much less than even training a million toddlers at once.
He has literal examples of head count increasing due to this use of AI; he’s not the idiot here.
Anecdotes are not statistics.
Head counts increasing at one company are often offset by losses from their competitors as they take market share due to increased productivity.
The number of auto mechanics went up as the number of horse ranchers went down.
Lack of anecdotes and data definitely isn’t data.
There’s an argument to be made here, but the OP hasn’t made it, just asserted that a well-written, well-evidenced post was written by an idiot.
As someone who works for a very large company, on a team with around 500 people around the world, this is what concerns me. Our team will not be 500 people in a few years, and if it is, it’s because usage of our product has grown substantially. We are buying heavily into AI, and yet people are buying it when our leadership teams claim it will not impact jobs.
Will I be able to take a unit of 2 people down to 0 people? No, I’ve never seen a process where I could eliminate every human.
Socially speaking, this is also very concerning to me. I’m afraid that implementation of AI will be yet another thing that makes it difficult for smaller businesses to compete in a global marketplace. Yes, a tech-minded company can leverage a smaller head count into more capabilities, but this typically requires more expensive and limiting turnkey solutions, or major investment into developers of a customized solution.
I honestly have no idea what the solution is. To me the issue is that with technology where it is, only about 20% of us actually have to do any work to keep all the wheels turning and provide for everyone. So far, in the western world, the solution has been to occupy people with increasingly-bullshit jobs (and, for some reason, not giving a lot of people who do the actual work enough to live on), but as technology keeps getting more and more powerful we’re more and more being faced with the limits of “you have to work to live” as a way to set things up.
The solution to both bullshit jobs and no life could have been to downscale work time, not the number of people. If 20% of people are enough to do the job, maybe it’s better to keep everyone but let them work only 20% of the time?
That won’t pass the shareholders’ vote, of course, because optimization must only mean “money optimization”
deleted by creator
In that case, the whole tech industry should, in solidarity, refuse to look for work and let the tech companies that just launched major layoffs feel the foolishness of their actions. Those tech workers need to wait long enough to allow Google, MSFT, Meta, Apple, etc. to suffer the consequences of automation. If they managed this, when the companies finally do come crawling back, tech workers could get fat raises using this solidarity and collective action.
Pro tip: you can actually get organized in a union and strike just to get more money, no need for AI or getting fired. CEOs hate this trick!
tip: you can actually get organized in a union and strike just to get more money, no need for AI or getting fired. CEOs hate this trick!
✊🏼
They’d need to unionize first.
It’s expensive to live in tech communities. All the workers would need to move their families to somewhere more affordable and demand to work from home, on top of everything else, and they’d need to have enough savings to afford that. Right now, tech workers tend to carry debt, which is the bane of collective action.
Sort of. But people in society CAN act in solidarity. It’s obviously unlikely (something tech CEO’s calculated in these layoffs).
Obviously, capitalist exceptionalism is going to cause them not to do this. No one wants to loan their neighbor some money to weather a strike that WILL eventually lift ALL BOATS because of the whole “fuck you, got mine” vibe of EVERYONE in cutthroat capitalist societies. If I had the money, I’d certainly take part in this kind of collective action…and I’d also argue that many tech workers can because they were paid INCREDIBLY well in comparison to most trades…but you and I know they won’t.
I’m a member of a stagehand union that will NEED to strike during the summer (our busiest season) in order to gain some ground back from what price gouging, austerity, and inflation have taken from us. I can easily guess how likely the membership will be to endorse a strike when we will have been out of work for more than a year when negotiations start. That doesn’t make what I said less true; just about as unlikely as a third party coming to power in the United States’ two-party electoral system.
deleted by creator
I agree with you but let’s not pretend that that’s not entirely BY DESIGN.
deleted by creator
The tech layoffs are not related to industry replacing those jobs with AI. Tech overhired and now they are adjusting. Simple as that.
Untrue. They overhired and were content to keep building up warchests of IP using those drones, then a billionaire wrote them a tersely worded letter and they responded by acting in solidarity (conspiring, actually) to force austerity on famously well-compensated tech workers whom they feel can AT LEAST be partially replaced by AI.
Reading stuff like this is so crazy. They just go “hey, 150k is a nice big number and should be plenty of heads, let’s just take away the livelihoods of tens of thousands of actual people to make a number look nicer.”
At least they didn’t choose 100k or 10k
/s partially
NPR suggests the tech layoffs of 2024 are just companies jumping on the bandwagon and shareholders frenzying over it…
deleted by creator
Barely adequate, or even incompetent, is perfectly fine as long as they can make money. I’m not even sure a major AI fuckup would stop the adoption of these technologies, especially because they show no sign of slowing down development. Mechanical Turks have limitations. Sure, now they are cheaper, but as soon as a barely functional (but still functional) “AI” comes along that is cheaper to use, you can wave goodbye to the digital sweatshops.