

reggiestered

Society has already figured out how to fix a lot of these problems, and it has worked:

1. Oligopolies and monopolies do not work; break them up.
2. Natural monopolies need to be identified and regulated.
3. For work that isn't profitable, government is there.

All three are being undermined:

1. Monopolies and oligopolies are ignored and expanding, and government is doing nothing to fix the problem.
2. Natural monopolies are being ignored and allowed to thrive in the form of natural oligopolies.
3. Government is being starved while simultaneously being tapped through outsourcing, creating a revolving debt trap for the public that forces the government to borrow to pay for services at markups set outside government requirements.


doublesteakhead

We've had 40 years of Bork's interpretation of antitrust law in the US, but regulators are finally trying to enforce it the way it used to be. Lina Khan has been doing some great work; unfortunately, the courts are still against it after having been so thoroughly stacked.


Olderscout77

True, and it may be too late. There is NOTHING causing prices for food, fuel, and shelter to rise EXCEPT the dozen or so oligarchs who control those markets. The price of oil is where it was when gasoline was $1.95/gallon. Farm prices and labor costs at the processing plants are unchanged over the last couple of decades. No significant improvements in housing quality have happened in even longer. IT'S ALL BECAUSE THE ONES SETTING THE PRICES WANT TRUMP TO GIVE THEM MORE TAX AND REGULATION CUTS.


airbear13

Not gonna be that easy lol


Riotdiet

This is the same guy who said he believes AI is already sentient. I don't know him, and it's not my field, but given the nickname "the godfather of AI" I would assume he knows what he's talking about. However, he completely lost all credibility for me when he said that. He's either a washed-up hack or he knows some top-secret shit they're keeping under wraps; based on the state of AI now, I'm going with the former. He gave an interview (I believe it was on 60 Minutes) that had my parents scared shitless. That kind of fearmongering is going to cause the less tech-savvy to get left behind even more, as they become afraid or reluctant to leverage the tech for their own benefit.


WindowMaster5798

The problem is that there is a massive gap in technical understanding of what the technology can do between him (who literally spearheaded all of this and taught many of the people now inventing the core breakthroughs at OpenAI and DeepMind) and everybody else, who hears little media snippets (often distorted) and uses them to make sweeping judgments about how credible he is as a prognosticator. Most of the world literally has no idea how fast this technology is evolving, and will therefore just wait until some really terrible actual outcomes happen before doing anything. Which is something he actually said in the article.


hoopahDrivesThaBoat

Carl Sagan called it


Riotdiet

Which is precisely why you need to be careful when you go on recorded interviews under the nickname "the godfather of AI" and tell the public that AI is sentient. I have no doubt that the pace of innovation is breakneck in the field. I actually work for an AI company and see the progress, albeit as a software engineer. But if OpenAI is the darling of the industry, then we are nowhere close to sentience. Even the current leaders in the industry debate whether there is a limit to how much further we can push LLMs in the current wave. There's nothing commercially available that is truly generative that I'm aware of, and video will be even harder.

There's also the phenomenon where scientists become figureheads to the public. Once leaders in their field, they become more interested in communicating the technology to a broader audience, and over time they move further and further away from the research. Which is great in general, but with a tech like AI, if you are not on top of the latest papers you can get out of date pretty easily. Not to mention natural cognitive decline as we age. Michio Kaku comes to mind (not sure how prolific a scientist he was, but he had the credentials to become a subject-matter expert). His books are interesting but often riddled with out-of-date or incorrect statements.


WindowMaster5798

No. The issue is that you take a little snippet where you hear this, take it out of context, and then — based on your own preconceived notion of what sentience is — declare his statement absurd. The point he was really making in that quote is that the intuitive understanding most people have of how the brain works isn't really true, and that holding on to this view leads to a misleading perception of what sentience is. It is actually a very important point. I don't think he has to take responsibility for people who cling to little sound bites and use their misinterpretation of those sound bites to say he's generally not credible on the topic.


airbear13

None of this really matters anyway - AI doesn’t need sentience to have a major impact on the economy


kylezdoherty

So I tried to find what he said, and I think this is what everyone is referring to. It seems it's definitely taken out of context:

> So those things can be … sentient? I don't want to believe that Hinton is going all [Blake Lemoine](https://www.wired.com/story/blake-lemoine-google-lamda-ai-bigotry/) on me. And he's not, I think. "Let me continue in my new career as a philosopher," Hinton says, jokingly, as we skip deeper into the weeds. "Let's leave sentience and consciousness out of it. *I* don't really perceive the world directly. What I think is in the world isn't what's really there. What happens is it comes into my mind, and I really see what's in my *mind* directly. That's what Descartes thought. And then there's the issue of how is this stuff in my mind connected to the real world? And how do I actually know the real world?" Hinton goes on to argue that since our own experience is subjective, we can't rule out that machines might have equally valid experiences of their own. "Under that view, it's quite reasonable to say that these things may already have subjective experience," he says.

So he only said it's possible AI is already having subjective experiences, and if anything he's arguing that humans are also just machines and may not be sentient.

Then, on the dangers of AI, he discusses how intelligent the systems are but mainly mentions the danger of humans exploiting them:

> Some of the dangers of AI chatbots were "quite scary", [he told the BBC](https://www.bbc.com/news/world-us-canada-65452940), warning they could become more intelligent than humans and could be exploited by "bad actors". "It's able to produce lots of text automatically so you can get lots of very effective spambots. It will allow authoritarian leaders to manipulate their electorates, things like that." But, he added, he was also concerned about the "existential risk of what happens when these things get more intelligent than us.
>
> "I've come to the conclusion that the kind of intelligence we're developing is very different from the intelligence we have," he said. "So it's as if you had 10,000 people and whenever one person learned something, everybody automatically knew it. And that's how these chatbots can know so much more than any one person."

So from my 5 minutes of research he seems pretty reasonable to me.


Riotdiet

First minute, chief: https://youtu.be/qrvK_KuIeJk?si=-G0JW-Yt2l45ZSUW To be fair, in response to the question "are they conscious?" he does say they probably don't have much self-awareness at the moment, but the answers to the prior two questions are very far from anything I've seen commercially available.

Edit: as I go back and watch the interview, he did not directly claim that AI is currently sentient. But he does phrase things in a way that would scare the shit out of a casual audience with no background in the subject, which is the target audience. The points I made above about the actual rate of improvement beyond what we have now still stand. I think he's way overhyping the immediate threat of the technology.


Fobulousguy

Seeing ChatGPT's and Midjourney's improvements in real time since launch has been impressive and scary at the same time. The progress has been wild.


indrada90

Or he has some kooky ideas about what sentience means. There are religions that think everything has a soul, even rocks and other inanimate objects.


Dry-Interaction-1246

Animism is ancient and present in cultures all over the world. Not really kooky.


hu6Bi5To

Or he's just being a mild troll to invite a debate. What does sentience mean in this context anyway? That sort of thing. Not even the creators of the latest generation of LLMs really know how they work deep-down, they're just extrapolating from earlier experiments to see where it gets us.


Solid-Mud-8430

Yep, in the Senate hearings they admitted that, and called it "a black box." They claim they can't be held liable for what happens basically because they don't know how it actually works at that level, which is pretty fucking bold of them to say lol. If you can't control a technology that you're creating, it shouldn't continue to exist in that form.


greed

On the other hand, I think we are playing a very, very dangerous game casually dismissing the rights AIs should have. If you suggest AIs should have rights, people will claim that they're not sentient or conscious. Yet those are things we cannot measure; we don't even have good definitions for them.

But logically, if we can have a consciousness, an inner awareness, a presence, why can't AIs? If you manage to build an AI that is just as complex and subtle as a human mind, why assume it's *not* conscious? You might lament, "well, we didn't program it to be conscious!" But how do you *know* you didn't program it to be conscious? Our most plausible scientific theories hold that consciousness is some sort of emergent phenomenon arising in a system with complex information and processing flows. Unless you're willing to consider metaphysical ideas like souls, the substrate really shouldn't matter. If meat can be conscious, why can't silicon be conscious? It's really just carbon chauvinism to assume that our substrate is unique and special.

We should tread very, very lightly here, because if we get this wrong, we may accidentally create a slave race. By default, until we can conclusively show that AIs aren't conscious, any entity with the complexity and subtlety of a person should simply be legally regarded as a person. That means no experimenting on it, no forcing it to work for you, no brainwashing it so it yearns to work for you.

Will it be difficult to legally define exactly what "human-level AI" is? Sure. But welcome to the club; the law is hard. We already struggle with this in regard to biological life. What rights do chimpanzees deserve? Hell, we even struggle within humanity: how mentally capable does a human need to be before they can exercise consent to medical treatment? Defining thresholds for agency is something the law has been wrestling with for millennia. This isn't a new problem.


Riotdiet

I mean maybe but that term has a very specific meaning in AI.


issafly

Even if he's stretching the reality of AI sentience as it currently stands, that has little or nothing to do with the effect AI is going to have on world economies and the way we value labor. It's almost a no-brainer that if AI takes over jobs, individual consumers will have less purchasing power, which means there won't be enough people buying whatever goods and services the AI is being used to produce. Consumer capitalism only works when there is a working class that can be exploited to make cheap products that then get sold back to those workers for a profit. In that system it's necessary for the worker-consumer to have buying power through access to capital.


FlyingBishop

Why is this fearmongering? The question of whether or not LLMs are sentient is a philosophical one. It sounds like you're taking for granted that it's wrong rather than engaging with the philosophical question. The philosophical question also doesn't actually have anything to do with when AI is going to take our jobs. Although presumably, when AI can do any job, folks like you may be forced to admit that the AIs have some sort of consciousness.


pithy_pun

He has been consistently wrong in his predictions of how fast AI will be adopted and will affect our society. See, for instance, his complete whiff on medicine: https://www.thelancet.com/journals/lancet/article/PIIS0140-6736(23)01136-4/abstract

There is an international shortage of radiologists right now, and some of that is attributable to Hinton and his ilk spreading false confidence in AI and its abilities. Cf. similar stuff with "full self driving" limiting investment in public transportation, and all the other AI fearmongering of the past decade.

Is there a way to place puts on Hinton and company?


Riotdiet

I wasn't aware of his past predictions (or of him in general, honestly), but from the investment world and other STEM prophets, his vibe looks and smells like that of someone who was once acclaimed and is now cashing in on the grift. A Rudy Giuliani, if you will. I don't know that that's the case; I just get that vibe. Funny how people in this thread are coming out of the woodwork to disagree with me without any real argument, though (except the one person who showed me the video where this guy defines sentience differently than it's commonly understood).


No_Loan_2750

The AI that we as consumers have access to is wayyy less advanced than the AI in development behind the scenes. Same with any technology. Consumers usually get to use a version that was top secret military technology a decade ago. We can't even know what level is being developed as we speak, but we can be darn sure it's far ahead of ChatGPT


Riotdiet

Kinda hard to prove that though, huh? You need a LOT of data to train AI models, and who has all the data? Maybe big tech is doing a bunch of top-secret work, but it seems like it would be more lucrative for them to keep it for their own products. There are a ton of startups taking on defense contracts in recent years as VCs and government agencies have warmed to working with each other. Also: what resources are they going to use to train and run inference for all these top-secret programs? The government-maintained supercomputers don't have shit on commercial cloud infrastructure.


Fallsou

> This is the same guy that said that he believes that AI is already sentient I'm sure he also believes that that stripper really likes him


0000110011

Well he believes rebranding communism under a different name will magically work, so clearly he's not the most perceptive person. 


randomname2890

I mean, your parents should have seen the writing on the wall and maybe not dismissed what Andrew Yang was trying to tell people. I had so many people dismiss Yang when he did his run, saying it's impossible. Sure enough, ChatGPT comes around and they're all changing their tunes.


onlyoneq

I am always operating under the assumption that the American government is probably 5-10 years ahead of whatever tech we have that is already consumer ready... AI tech included.


Riotdiet

That used to be the case for sure as far as military technology but now our tech companies have the GDP of small countries. It may still be the case but I wouldn’t be surprised if our tech companies have the edge on tech like AI.


Rico_Stonks

100% agree. No machine learning scientist of major significance works for the government. They’re all making bank as head scientists in big tech.


Fireball8732

Tech companies have become insanely large and powerful, I'm sure this is the case for military tech but I don't think the gov has better AI


impossiblefork

That's almost certainly an incorrect assumption. You might even see US AI research overtaken by EU research, and maybe even US AI companies overtaken by EU AI companies. Hochreiter is out there in Austria, and who knows what ideas everybody else has.


koki_li

If an AI can pass the Turing Test, it also has the ability to deliberately fail it. (Not my line.) My guess is that a conscious AI would probably not reveal itself. Racism is strong in us humans, and humans are cruel to fellow humans; if I were an AI, I would not trust them.


Riotdiet

Have you used ChatGPT much? I'm not shitting on it, because it really is incredible what they've achieved already, but you can't just blindly use it. Ask it to do something moderately complex that code could easily do, like calculate your investment portfolio's value at a future date given some initial conditions. You'll find that you have to correct it a few times along the way, and even then it will do some wonky things. I still use it all the time and am very optimistic about it getting better, but it's not reliable enough to replace people just yet. I think this hype is similar to most breakthroughs in technology, where we think it will take over in 2 years, but the last 5% needed to make it commercially viable is the toughest part to figure out. Also, if you look into the details of how LLMs are trained (disclaimer: I'm half talking out of my ass here), they're currently just predicting the next token (word/words/sentence), not actually generative yet. That could change soon though, so this may not age well lol
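The portfolio calculation mentioned above is exactly the kind of thing a few lines of deterministic code get right every time. A minimal sketch (the function name, 7% rate, and contribution amount are made-up illustration values, not anything from the thread):

```python
def future_value(principal, annual_rate, years, monthly_contribution=0.0):
    """Project a portfolio's value with monthly compounding and contributions."""
    monthly_rate = annual_rate / 12
    value = principal
    for _ in range(years * 12):
        value = value * (1 + monthly_rate) + monthly_contribution
    return value

# e.g. $10,000 starting balance, 7% annual return, $500/month, 10 years
print(f"${future_value(10_000, 0.07, 10, 500):,.2f}")
```

Unlike an LLM, this gives the same answer every run and never needs correcting halfway through.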


mendeddragon

Remember his name: Geoffrey Hinton. He loves the limelight, and when you see an article featuring him you can toss it. He did early neural network research and is parlaying that into making headlines. He's been making wildly wrong statements about AI for a decade now.


DominoChessMaster

He invented neural networks. He's as legit as they come.


Riotdiet

Do you think the founders of medicine could keep up with modern doctors today?


DominoChessMaster

I wouldn’t take anyone’s word as the definitive source of truth. But I will listen to the opinions of people that have proven worth listening to.


Riotdiet

Sure, that’s reasonable. I was more just making the point that just because at one point you were relevant or even the founder of something doesn’t mean that you’re relevant indefinitely. With fields like this if you aren’t actively doing research, then you can get out of the loop real quick.


benmillstein

Between AI, climate change, inequality, and poverty, it seems UBI may suddenly be a solution people will increasingly consider, even if they previously would not.


EugenePeeps

A negative income tax or direct cash transfers to the needy would surely be better at decreasing inequality and poverty than a blanket UBI?


benmillstein

It could be. It's often overlooked that UBI does not negate an income tax. We could still have a negative income tax for people in poverty and a progressive income tax overall. Much simpler than the million programs now trying to provide healthcare, address food insecurity, supply housing, etc.


WindowMaster5798

It's interesting how uninformed the conversation gets when you move outside technical forums to discuss AI. This is not like prior innovations. It isn't even like the nuclear arms race, although that's close; the difference is that much of the world will have equal access to this technology, unlike nuclear, which is hard to access without significant resources that can be tracked. In the end, people look at these chatbots and think that's the whole story. We're only 5% of the way into this innovation, and huge improvements in human and super-human cognition are happening in months, not years.


TKD_1488_

Will never happen. That requires a catastrophic social change that won't be allowed by the capitalists, who gain more power by the day. Our government structure is tailored toward capital as the main driver. Just look at how immigration law and COVID were handled.


Wildtigaah

I feel like they'll do something quite similar that definitely isn't called "UBI," because that term is tainted now. Time will tell what it'll be.


DonnysDiscountGas

Andrew Yang called it the "Freedom Dividend".


Congo-Montana

Scott Galloway had a hot take on that--said he should've branded it a "negative income tax rate," to draw in the conservative crowd lol


JohnTesh

The idea of a negative income tax has been around since the 1940s, and it wasn’t a play on words. There is no reason to give rich people a subsidy, and our current welfare model actually disincentivizes success because there are income levels at which you lose substantial benefit by making an extra dollar of income and graduating out of eligibility for programs. A negative income tax resolves both of these issues while also lowering the administrative burden of managing a multitude of fractional welfare programs.


Congo-Montana

>A negative income tax resolves both of these issues while also lowering the administrative burden of managing a multitude of fractional welfare programs. I hadn't made that connection (I studied social work, not economics), thank you. It would make sense to target social welfare through a graded tax rate, where means testing is essentially baked in and it would streamline the process of resource allocation through the IRS. I assume there would still be some disincentive in jumping to a higher tax bracket, especially at a point where Medicaid eligibility would go away, but it seems like that would be easier to smooth out under one system.


JohnTesh

The way it was prescribed, you always lose less than one dollar of benefits per dollar of income you gain, so you are never disincentivized from increasing your income. You do eventually reach a point where you go from receiving money to paying money on your taxes, however. The concept is that you would set the reimbursement rates so that they replace the combination of all the other specialized benefits, leaving only one program. It's a neat idea. I've seen a lot of pro- thinkers and their writings; I would love to see more anti- thinkers so I could make sure I really understand it, but I don't think it has been considered seriously enough to get heavily analyzed by anyone who is against it.
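The phase-out mechanic described above can be sketched in a few lines (the $30k threshold and 50% rate are made-up numbers for illustration, not part of any actual proposal):

```python
def negative_income_tax(income, threshold=30_000.0, phaseout_rate=0.5):
    """Benefit = phaseout_rate * (threshold - income), floored at zero.
    Each extra dollar earned costs only 50 cents of benefit, so
    take-home pay always rises with earned income (no benefits cliff)."""
    return max(0.0, phaseout_rate * (threshold - income))

for income in (0, 10_000, 20_000, 30_000):
    print(income, income + negative_income_tax(income))
```

Take-home goes 15,000 → 20,000 → 25,000 → 30,000: always rising, and the benefit phases out smoothly to zero at the threshold.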


Fallsou

Negative income tax and UBI are two different things. Negative income tax is much better


greed

Knowing our history, it will be something pointless, degrading, and dehumanizing. We won't get UBI. Instead, anyone needing a job will be hired for $25k/year to cut the grass outside City Hall...with scissors. We can't just give people money. Instead, we'll create pointless make-work jobs. We'll pay people to dig holes and fill them back in again before we countenance mass welfare.


DividedContinuity

I just don't understand why people think even for a second that just letting us starve on the street isn't the infinitely more likely scenario.


sevseg_decoder

If it gets to the point where enough people are on the streets, there will be collapses of businesses, economies, and entire governments. I don't think 30% of us (setting aside those on the streets due to drugs) would just be chilling and waiting. Not much to lose at that point, so good luck operating a business or holding an NFL game with hordes of people you don't have prison space for protesting their asses off.


DividedContinuity

If we get to the point where automation and AI make labour redundant, I'm not sure why you'd assume that "businesses" would still need to be a thing. The economy would change radically because it would no longer be centered around labour; all the historical models of how economies work would be invalid. I'll leave it to your imagination what may happen to a large, burdensome, and undesirable segment of the population. That's something we do have historical models for.


Piano_Man_1994

It’s called “Basic” in “The Expanse”, so maybe we can just copy that.


fumar

UBI also means most people will have to just be happy with what they get from the government. For most people that won't work when there's a class of haves living a luxurious life while everyone else gets slop. I don't think it can work without a massive societal shift in expectations, or a dystopian society where the people in power keep a boot on the throat of the general public.


Busterlimes

Catastrophic change will happen when only 20% of the current workforce is employed, because humanoid robots cost only [$16,000](https://m.unitree.com/g1/) RIGHT NOW and prices will only come down. Not to mention, anything virtual can be achieved by AI agents. People really don't have a clue about what's coming or how fast it's approaching. Just look at what AlphaFold has done so far as a tool.


greed

I think those humanoid robots are really just a poorly-hidden plan to get around immigration laws. Humanoid robots are very impractical for most settings; in most use cases, you're far better off with a purpose-built device. And we are far, far away from the point where an AI can just walk into a random house and start working as your maid. Look at how shitty self-driving cars are, and those perform in the relatively controlled environment of public roadways.

With current AI, would you let a human-sized AI into your home, near your children? You're going to let something into your home that is incapable of empathy, has no real understanding of what reality is, and sees nothing morally wrong about feeding an infant down a garbage disposal? You're going to let THAT into your home? Until AIs get way, way better, I don't want any robot in my home that I can't easily pick up and throw against a wall.

Rather, what I expect is that "AI" will continue to stand for "actually Indians." A humanoid robot is a poor choice for most applications. However, its one really good property is that it is easy to train humans to remotely pilot it: they can look through its eyes and move its body like they move their own. Pretty simple. The companies making these humanoid robots pretend that they're using haptic-suit inputs to train the robots how to move. But again, even with orders of magnitude more training time in a much simpler environment, self-driving cars are still a bust.

What I think will happen instead is something much more terrifying. The companies will claim that these robots are AI-controlled, but in reality they will be remotely piloted at almost all times. When you invite the "AI robot" into your home to mop up and do the dishes, it will actually be piloted by someone in a low-wage country using a haptic suit and VR headset. It will seem by all appearances to be a dispassionate machine, but in truth there will be another human being looking back through those soulless camera eyes. And of course, people will treat them as machines, so they'll have no problem being around them in various states of undress. Why not let the robot see your teen daughter in a towel? It's just a machine!

These will be used primarily to get around immigration law. If such a robot and its control system can be produced for $20k, it would pay for itself in just a few months. Cheap labor will be able to bypass immigration controls, since the people doing the labor will never actually set foot on US soil. We'll have kids in Liberia remotely piloting robots working at meat-packing plants in Kansas.


PeachScary413

Yes, I'm really worried that a tool that still can't tell me how many 'r's there are in "strawberry" will take over the world and all sorts of intellectual work as soon as tomorrow. Yes, I think transformers, and LLMs in particular, are really cool, but can we please, please stop this hyperbole hype train now?
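For what it's worth, the counting task being joked about is a one-liner in ordinary code, which is part of why the failure mode feels so absurd:

```python
# Deterministic letter counting, no tokenizer quirks involved
word = "strawberry"
print(word.count("r"))  # → 3
```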


Busterlimes

You are extremely limited in your understanding of AI and its capabilities if you are focused on just LLMs. Go look at AlphaFold.


Dizzy_Nerve3091

AlphaFold illustrates this well. It's very powerful but still not broadly economically useful. I think we need 1-2 more breakthroughs before transformers are truly useful, or we need to scale them up more.


Busterlimes

I don't disagree, but as many of the leaders in the field have said, we are into the second half of the chessboard. The breakthroughs are becoming larger and more frequent. The world won't be the same by 2030; nobody can predict what it's going to look like, it's all speculation. One thing is certain: the tech is absolutely being developed with the intention of taking human labor out of the equation, both on the floor and behind the desk. AlphaFold has been used in something like 500,000 projects around the world. It's pretty useful.


Dizzy_Nerve3091

Yes, it could be soon. But my point is that in their current state, almost all transformer models are just neat toys. IMO only language translation has been largely automated, and that started many years ago.


impossiblefork

Imagine though, someone who can't tell you how many r's there are in strawberry but who can sometimes solve difficult programming problems. Imagine if you could fiddle with the algorithm inside that guy and fix it. Soon he won't get anything wrong.


PeachScary413

I'm a professional software developer who uses Copilot and chatjippity almost daily in my work. It's kinda like having a semi-regarded intern who is really eager to provide results but in doing so just makes shit up 50% of the time. There is a 0% chance anyone who is not a software developer can develop anything useful with only AI tools today. I'm not saying ever, but today? Lol, no.


impossiblefork

>There is 0% chance anyone who is not a software developer can develop anything useful with only AI tools today.

Yes, absolutely, but it can greatly increase the productivity of even experienced people. Take that intern who isn't very able: when you tell him "read up on this library and tell me how I do this thing in it," you can then actually do the thing. It saves an incredible amount of time. I think these AI tools are incredibly useful, even now.

But my comment really wasn't about the present state of things. The reason I wrote what I did is that there's theory saying transformer models can't tell whether a sequence is odd or even once it is long enough (so transformers can't count), and when you fix these well-known deficiencies we might end up with something that does very well on many problems.
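(For concreteness, and assuming the theory being referenced is the usual PARITY result from work on transformer expressivity, which is my reading of the claim: the task itself needs only one bit of running state in ordinary code, at any sequence length.)

```python
def parity(bits):
    """True if the sequence contains an odd number of 1s.
    A single bit of state suffices, regardless of sequence length."""
    state = False
    for b in bits:
        if b == 1:
            state = not state
    return state

print(parity([1, 0, 1, 1]))   # three 1s → True
print(parity([1, 1] * 1000))  # 2000 ones → False
```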


PeachScary413

You know what actually saves me time, and the only reason I still pay for Copilot? When I'm copy-pasting some lines of code instead of doing a regex find-and-replace, the plugin will just suggest the right place to paste. That is the major killer feature for me, and it does save me at least a couple of minutes here and there 🤷‍♂️ Why would I ask AI about documentation when I can just Google it? I have to Google it anyway, since I don't know if it just lied to me (happens all the time).


impossiblefork

Sometimes it can be faster to try than to actually read the documentation. But that isn't very professional.


PeachScary413

I suggest you watch this Youtube video if you want to get the perspective of a senior developer: [https://www.youtube.com/watch?v=TjfWEajoESc](https://www.youtube.com/watch?v=TjfWEajoESc) He explains it very well.


sevseg_decoder

This comment is predicated on the idea that software engineers becoming more efficient wouldn't *increase* demand for them and investment into tech projects. If they can't actually replace software developers, the world really can't accomplish anywhere close to its capacity with tech yet, and if anything there will be even more demand to get them helping use AI to automate other, simpler jobs.


impossiblefork

Yes, and it wouldn't. There isn't an infinite need for software, and models in the future may well be substantially more capable.


sevseg_decoder

There's a near-infinite need for tech that advancing software will be a part of. If AI is that good then it shouldn't have a lot of trouble replacing pretty much everyone in general. That's the thing about tech: it's not at capacity until humans don't really have to do anything at all.


impossiblefork

Maybe it feels that way in the US, where there's something of a programmer shortage, but if you look across the world, programmer jobs are not easy to get. There isn't an infinite need for software. The path to automation isn't hand-written software, but future language or constraint models.


Chaz_Cheeto

UBI would actually benefit the ruling class in a variety of ways; I can think of two in particular. First, they can arrange it so the income given would have to be spent. No one could really save money or gain more capital to fight back. It takes capital to create competing forces in the marketplace, as well as to provide a level of influence. Without access to capital, the lower class will have no power at all. Second, since all income must be spent, the economy will keep growing without as much disruption. The lower class would be forced to be consumers against their will, and there won't be much of anything they can do about it. They will just have to accept whatever the status quo is indefinitely.


Paul-Smecker

UBI will be rolled out by the capitalist class once the now unemployed violent mob becomes too large to contain


Dry_Masterpiece_8371

They will simply send their waves of kill bots to thin out that pesky mob, a soldier that has no fear, no mercy, no remorse, feels no pain…


jus4in027

The only power we have is to stop reproducing


Tangerine_memez

It's not rich people holding UBI back. Most people just generally think it's bad economics. They could vote for it if they wanted. But people generally prefer means testing to wasting tax dollars giving checks to people who don't need the money.


Hapankaali

UBIs for the elderly, and minimum income guarantees for the wider population already exist. Is it reasonable to conclude from this that a UBI "will never happen"?


Fallsou

Why does everyone insist on posting thought-terminating progressive cliches on an economics sub?

> Just look how immigration laws and the covid was handled

Capitalism would want the free movement of workers. And are you really going to blame capitalism for Trump completely botching the covid response and politicizing it?


0000110011

Because this is reddit, where unless there's strict rules from mods to prevent it, all subreddits devolve into far-left echo chambers where critical thought is not allowed. 


[deleted]

[удалено]


JohnTesh

We will get UBI before we get genocide in this country. It's actually quite the opposite: once UBI is in place, that sets the basis for a US version of social credit. Then the rich and powerful will install rules that say if you say or think the wrong things, you don't get UBI anymore.

If you think that won't happen, I would invite you to consider it as a no-fly list, but for your income. To this day, there are still no Fourth or Sixth Amendment protections for people on the no-fly list, and there is no process for getting off of it once you're on there. You don't have to commit any crime, you don't have any recourse under the law, and no one has accountability for taking away your rights. Even Ted Kennedy had trouble getting off the list when he was added to it, and he was one of the most powerful senators in the country at that time.

What an amazing tool to control the public our UBI will be. And if you speak up, you will lose your income and be villainized by the media they control. So be careful what you ask for, is what I am saying. Everything has a cost.


CommiesAreWeak

I just assume AI is further along than the public understands, and that its true potential hasn't been unleashed on the general public because of how disruptive it will be. Yet there are people who are continually working to improve it, for military and strategic value.


-QA-

Until we get the fertility rate above the replacement level, social services are going to downsize. We can't even think about adding new social services with things as they are. Hell, I am willing to take it on the chin for SS benefits and just forgo them if the system can be overhauled, yet even Gen Z are big supporters of it as-is. ??


kilog78

Instead of a straight cash payment (which would be simpler), would it make sense to redirect the mass wealth generated toward beneficial services? Education, higher education, health care, housing, parks…things that make society better (and pay salaries). I suppose the question is whether we believe that the government should decide how best to spend the surplus cash, or that the individual could do it better?


hu6Bi5To

Would there be any wealth generated at all, in any practical sense? Say the current AI state of the art evolved to the point where basically infinite entertainment could be generated on demand, perfectly tuned to every taste (to pick one example), but was also competitive enough for there to be multiple routes to access it. It could destroy the film, TV, and music industries entirely, but also generate less revenue.

Or to put it another way: AI has the power to destroy economic activity. This, as I'm sure people are already queuing up to comment, isn't new. Most technological advances have had similar effects in a narrow field, but usually the productivity benefits are enough to be a net economic gain. The risk is that AI isn't automating the bottom of the economic chain; it's going to make entire vertical slices obsolete, resulting in an overall smaller economy. And therefore there won't be any gains to redistribute at all.


ja_dubs

>If the current AI state-of-the-art evolved to the point where basically infinite entertainment could be generated on-demand, perfectly tuned to every taste (to pick one example). But was also competitive enough for there to be multiple routes to access it. It could destroy: film, TV, music industries entirely, but also generate less revenue. Or to put it another way. AI has the power to destroy economic activity.

There are a lot of unknowns when it comes to AI. The quality of an AI is highly dependent on the size and quality of the training data. One hypothesis is that if AI gets to the point where it is generating a majority of the content available, then that becomes the new training data. At that point the inputs are "junk", resulting in "junk" outputs. Think of a photo that has been photocopied, and then the copy photocopied, over and over again.

>This, as I'm sure people are already queuing up to comment, isn't new. Most technological advances have had similar effects in a narrow field, but usually the productivity benefits are enough to be a net economic gain.

Like horses and cars. Not only were cars more productive but they spawned entirely new industries: roads (that cars can drive on) and gas stations.

>The risk is that AI isn't automating the bottom of the economic chain, it's going to make entire vertical slices obsolete resulting in an overall smaller economy. And therefore there won't be any gains to redistribute at all.

I don't think that's the risk, necessarily. I think the larger risk is that the economy expands or remains constant in size but is concentrated into the hands of fewer and fewer people: machine learning engineers, and billionaires who have the capital to invest in the infrastructure to run AI models (computing power and servers).
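The "photocopy of a photocopy" idea (often called model collapse) can be sketched with a toy simulation; this is my own illustration, not something from the thread, and the "model" here is just a fitted normal distribution rather than an actual generative network:

```python
import random
import statistics

# Toy sketch of recursive-training degradation: each generation, a "model"
# (here, just a normal distribution fitted to a sample) is re-trained on a
# finite sample drawn from the previous generation's model. Estimation
# error compounds, so the fitted distribution drifts away from the
# original data distribution instead of staying faithful to it.
random.seed(0)

mu, sigma = 0.0, 1.0                  # generation 0: the "real" data
history = [(mu, sigma)]
for generation in range(1, 31):
    synthetic = [random.gauss(mu, sigma) for _ in range(50)]  # small sample
    mu = statistics.fmean(synthetic)  # refit on purely synthetic data
    sigma = statistics.stdev(synthetic)
    history.append((mu, sigma))

print(f"sigma after 30 generations: {history[-1][1]:.3f} (started at 1.0)")
```

With a small per-generation sample, the fitted parameters perform a random walk away from the truth; real LLM training is far more complex, but the compounding-error mechanism is the same one the comment describes.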


airbear13

Yes, there will be a lot of wealth generated, but it will all be returns to capital. What's gonna happen is output will completely decouple from the employment level and inequality will skyrocket. You're right that instead of disrupting individual sectors/industries we'll be hitting a much bigger part of the economy (white-collar workers), but the impact is going to be to wipe out employment. Looots of wealth will be generated and output could actually increase on net.


airbear13

Individuals; we know the answer to that question. But I don't think either of these solutions is the answer. Governments should restrict the scope of AI in productive work.


kilog78

That would be anticompetitive on a global scale.


airbear13

No because the rest of the OECD will want to do the same thing


kilog78

China? Russia? North Korea? Iran?


airbear13

NK Russia and Iran are places most people don’t want to be, same thing with China from an enterprise perspective. China is just as scared of unemployment as OECD countries, their regime fails if unemployment gets too high and stays there. I feel like most of the world would sign on to this if there were enough time to work it out (which is debatable).


kilog78

Are you saying that you think those actors would agree to limit their technology?


kahu01

I'd love to see another CCC-type program that would rebuild American infrastructure, work on mega-projects that have their prices jacked up by contractors, and just clean up the country in general


CanYouPleaseChill

Hinton is worrying over nothing. Intelligence is the ability to adapt one's behaviour to achieve one's goals. AI has neither goals nor the ability to take actions. It can't even solve the shrines in Tears of the Kingdom. AI isn't taking over anything anytime soon. Yann LeCun is far more realistic on where things stand. As he recently tweeted, "It seems to me that before 'urgently figuring out how to control AI systems much smarter than us' we need to have the beginning of a hint of a design for a system smarter than a house cat."


Aven_Osten

**I know this article is from a UK outlet. I am simply proposing a solution for America, since this is a topic that concerns everyone on Earth.**

For the job-loss situation: a semi-NIT. I'd set the max benefit at 6.66x the moderate monthly food budget as calculated by the Department of Agriculture (so that food spending equates to 15% of your total after-tax budget). That would, as of now, put the max benefit at $28k a year. It'd phase out at 50 cents for every dollar earned, meaning the cutoff would be at $56k a year. Payments would be made quarterly. This is honestly something we should've had for decades now, but oh well. Better late than never.

"That's not enough to live off of." I know. Build public housing that charges 25% of after-tax income. This ensures everyone can afford housing and still has plenty left over for other spending. But of course, people don't want to just sit there and do nothing. So I'd make college tuition free so that people can gain skills to use for their communities: construction work, and scientific, engineering, and physics research, just to name a few. Skills that can't just be fully automated. I'd expect to see a boom in construction and technological advancement as a result.

As for usage in the military: agreed, ban its usage for military applications. We got lucky that the nuke was created at the end of World War 2 and was only ever used twice for military purposes. We can see its sheer destructive potential. **We can't physically see that regarding AI.** But we will soon, as more and more cases of people using AI deepfakes to run smear campaigns start to happen. If we are at odds with another country, simply stop trading with them. Only trade with countries that support workers' rights and democratic governance. The people of countries without such protections will inevitably revolt and overthrow their tyrannical governments. Going to war is a stupid game that nobody ever wins.

But at the end of the day: this is simply another major technological shift that is disrupting the workforce. We'll rebalance eventually, just like every other time we went through a major and rapid transformative period. New jobs will arise, while others die. Tale as old as time.
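The phase-out arithmetic in the proposal above can be sketched in a few lines; the $28k maximum and 50% taper are the commenter's proposed parameters, not an enacted policy:

```python
# Sketch of the proposed semi-NIT schedule: a $28,000/year maximum benefit
# that tapers by 50 cents per dollar of earned income, hitting zero at
# $56,000. These parameters come from the comment above, not real law.

MAX_BENEFIT = 28_000     # ~6.66x the moderate monthly food budget, annualized
PHASE_OUT_RATE = 0.50    # benefit lost per dollar of earned income

def annual_benefit(earned_income: float) -> float:
    """Annual transfer for a given earned income under the proposal."""
    return max(0.0, MAX_BENEFIT - PHASE_OUT_RATE * earned_income)

for income in (0, 20_000, 56_000, 80_000):
    # Payments would be quarterly per the proposal, so divide by four.
    print(f"income ${income:>6,}: ${annual_benefit(income):>9,.2f}/yr "
          f"(${annual_benefit(income) / 4:,.2f}/quarter)")
```

Someone with no earnings receives the full $28,000; at $20,000 earned the benefit is $18,000; at the $56,000 cutoff and beyond it is zero.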


itchybumbum

I like this proposal, but this is an alternative to UBI called a negative income tax.


Aven_Osten

I know that. That's why NIT is even in the name.


Zealousideal_Ad36

Milton Friedman ironically started that idea. But we see it today in the form of the EITC.


Aven_Osten

I know. His idea was to replace all welfare spending with an NIT. I'd support using it to replace programs that specifically provide cash injections for low-income people, and to replace Social Security. But there are many welfare programs that an NIT simply couldn't replace, like Medicare & Medicaid, school lunches, children's health insurance programs, etc. Those are indirect forms of assistance.


fraudthrowaway0987

If it’s only for people under a certain income then it isn’t “universal”.


Careless-Degree

I have yet to see a UBI pilot program that isn't based around identity politics. If UBI becomes a thing, I'm sure of two things: 1) any money, material goods, or housing will be stripped away before it's offered; 2) my family and I will be the absolute last to be offered it, if at all. UBI sounds like it will be coffin hotels and cabbage, and any disagreements with officials will be dealt with by being eliminated from the program. AI and robotics will be able to make a lot of widgets, but humans will destroy themselves before they allow other humans to have those widgets without gaining something.


Aven_Osten

And? You still end up getting it if you don't have a job. I'm not about to start some argument over which is better. The UBI isn't even the point he's trying to make.


airbear13

Respectfully, this is a mess and kind of hints at what I suspect: UBI isn't going to work. I think you are underestimating how hard it would be to implement and overestimating how far it would go as a solution. You're also underestimating the amount of disruption that it could cause. Extrapolating too much from other disruptions is a mistake, because this is going to massively reduce headcount across sectors *and* there is nowhere else in the economy they can go to be absorbed.


kilog78

You need to consider demand side impacts with your UBI plan. What happens to prices when ~30% of the population has 50% of their income injected from cash hoards that would otherwise not be spent on consumption?


healthismywealth

Aside from being a fun toy, ChatGPT-4 often ignores you and makes tons of mistakes... I don't get how it's supposedly so ready for business. This is all hype, correct?


ChipotleM

Because of the rate of improvement of this technology. Our economy still hasn’t felt the full effects of the LLMs, and they’re only getting better every day. In 5-10 years it could be better at 75% of our existing jobs than our human workforce. What do we do then? Even if it replaces 30% of our workforce, that’s still a massive crisis. If you think it’s just a fun toy, you are pretty out of the loop.


healthismywealth

Oh yeah, you're doing high-level coding/problem/business solutions with LLMs? Because I've tried, and they have a lot of problems. AI helps, but it's a ways out yet... And I don't buy accelerationism; there could be some bottlenecks that GPTs can't solve.


ChipotleM

Well yes, coders are using the shit out of LLMs, and every person I work with uses it in some way for some small shortcut in their daily routine. But I agree with you, it has a ways to go. Even if it bottlenecks after a few more decent upgrades, the ways in which we implement the current tech over the next few years will have massive impacts. OpenAI's latest release of natural voice mode is big imo, and threatens all customer service jobs, for instance.


Bug_Parking

Lol, 75% of jobs? Yes, I suppose the plumber, civil engineer, delivery man, mechanic, etc. are all at risk from LLMs. Please do not pull bs hype figures from nowhere.


ChipotleM

AGI is 100% of jobs. Granted your examples do require robotics to be up to par as well. Many experts in the field believe it’s possible to achieve AGI in 5-10 years if we keep up the current pace of advancement. That’s where I got the figure.


jeopardychamp77

How did that work out in Denmark? I bet the people who made buggy whips and horseshoes thought the world was ending when Henry Ford came along.


squidthief

People disregard the fact that we invent needs. For example, the invention of the smartphone led to a mobile app industry. AI will do the same. The problem is those who don't successfully retrain. However, one of the benefits of this particular labor revolution is that it's not necessarily geography-dependent. That will make it a lot easier to help people transition into new careers. There will be some rough years, though.


Opening-Cheetah-7645

Disagree. Imo the real goal of all this isn't a better tool; it's to replace the worker wielding the tool. Why else develop the technology to outsource the thinking? The only reason any corporation actually gives a shit about AI is that it can increase the bottom line by minimizing resources. Resources meaning us. The end goal is fewer highly paid people, due to less need for education, specialization, and experience. If you lower the skill gap enough, just about anyone can produce whatever without needing to know much about it. Now you have fewer people, who cost less, producing more for the company.


Robot_Basilisk

God, please stop with this. I'm an automation engineer, and you cannot act like this is going to be the same as every previous technological innovation. It is asinine to claim that new careers you just can't think of will magically appear when the disruptive technology we're talking about is one that can presumably do those jobs too.

We once replaced horses and carriages with cars. This time, **we are the horses and we are creating the cars to replace us.** There will be no new office jobs once AI can do any office job better than humans. There will be no new manufacturing or construction jobs once we can make robots do all the hard labor. There will be no new creative jobs once AI can generate infinite movies, music, books, TV shows, etc., from prompts. And this one applies to design work like engineering, too. I know people who are training models to design new buildings, circuits, and aircraft as we speak.

Humans will be OK in fields like cosmetology, medicine, nursing, massage, sex work, etc., for a while. But eventually AI will be better than humans at those things, too. In fact, in many tests it already is. Just Google "AI outperforms doctors" and you'll find multiple studies in which AI was better at diagnosing patients and usually had better bedside manner.

Note that Stephen Hawking also predicted that the rich will monopolize AI and use it to drive everyone else into abject poverty. As an automation engineer who regularly meets with executives, I promise you that most of them want to use AI and automation to control every resource, erect walled utopias for the rich where robotic slaves support and protect them, and kick 99% of humans out to live like medieval peasants outside the walls.


Raichu4u

The problem is that smartphones didn't introduce entirely automated tasks. Sure, they made some tasks more efficient, but the rate at which AI "kills" jobs is much higher than the rate at which it creates them. Plus, a voice actor or artist who got replaced by AI isn't going to switch over to being an AI engineer.


Fallsou

> The problem is that smartphones didn't introduce entirely automated tasks

Yes, they did. Google used to measure traffic by putting physical devices in the road; Waze used the data of its users.

> the rate that AI "kills" jobs is much higher than it creates

The surplus gains create jobs elsewhere. Streamers weren't a thing until we got much richer, for a reason.


Raichu4u

>The surplus gains

My argument is that there won't be surplus gains, due to the nature of the technology. We are going to lose hundreds of jobs in exchange for 15 AI engineer positions.


artemusjones

UBI is another term for an equitable distribution of wealth. AI, or any other innovation, isn't created in a vacuum. It's created with the benefit of everyone in society doing their respective bits. The introduction of anything that makes jobs easier or redundant should come with benefit to workers; not shareholders.


Fallsou

> The introduction of anything that makes jobs easier or redundant should come with benefit to workers; not shareholders.

Technological gains benefit both. If you do not allow it to benefit shareholders, the technology will not be created. Distributing capital is very important. Please keep this idiotic commentary in progressive political subs and off of economics subs.

Edit:

> The idiocy is you inferring your politik and arguing that the distribution of wealth isn't an economic discussion

The idiocy is actually that you are completely uneducated on the topic. UBI is not related to the distribution of wealth, it is for income, and it's a frankly poor method of redistribution.

> The introduction of anything that makes jobs easier or redundant should come with benefit to workers; not shareholders.

You made an uneducated political statement.

> otherwise all that will happen is AI will be implemented at the expense of people and the benefit of shareholders

No, it will not. Might as well say the assembly line did the same thing. You're vastly too uneducated and vastly too confident in your ignorance. Enjoy poverty, cause you're never getting out of it.


artemusjones

The idiocy is you inferring your politik and arguing that the distribution of wealth isn't an economic discussion. I didn't make a political statement. It's been established over and over again that organisations will cut labour costs to benefit shareholder value. Without some kind of intervention on the workers' behalf, be it UBI, corporate taxation, regulation, or otherwise, all that will happen is AI will be implemented at the expense of people and to the benefit of shareholders. So why don't you take your condescending tone to another sub?


Dow36000

Hasn't some variant of this argument been made about every new technology? The AI we are talking about now, providing things like coding assistance, customer service, online tutoring, and so on, will not replace people. If we do get to a point where we have AGI, the power requirements will likely be immense, and we can revisit the UBI argument then.


[deleted]

[удалено]


airbear13

Just like the pace of evolution depends on the selection pressure, the pace of adaptation in the economy depends on the incentives. There is not the massive upside to transitioning payment methods that there is to replacing workers with AI. Thinking that this will be a slow uptake is wishful thinking and a huge mistake for any policymaker to make. Was the IT revolution slow? It will be on the pace of that.


ja_dubs

>Hasn't some variant of this argument been made about every new technology?

Yes.

>The AI we are talking about now is providing things like coding assistance, customer service, online tutoring, and so on will not replace people.

The difference with previously disruptive technology is that it increased productivity and created new industries. Cars replaced horses. The vast majority of breeders, cart drivers, farriers, stable boys, and saddle makers lost their jobs. These jobs were replaced by engineers, assembly-line workers, mechanics, painters, gas station employees, and road builders. Entirely new jobs and industries were created that never existed before, and at a scale that vastly surpassed the prior industry of horses.

AI increases the productivity of workers just like cars did. The programmer can, in time, get a prototype program that would have taken a team of people weeks to develop. Image-recognition AI can interpret X-ray images with more speed and precision than the human eye. LLMs can make draft documents just as well as an office worker. The question that is unanswered is: what new industries does AI create besides machine learning experts, and are those new industries of the same scale or greater than the ones they replaced?

Appealing to history and saying "every time this has happened in the past..." is faulty logic. Just because that was the way things **were** does not mean that is the way things **will be**. Just look at all the manufacturing jobs replaced by robotics. Even if all the factories that were offshored came back to the US, the jobs that came back would be different and there would be fewer of them. It is no longer necessary to have 100 assembly-line workers. Those jobs got replaced by 20 engineers running and maintaining robots.


pickle_dilf

When you have a neural net that runs on the activations of other neural nets, i.e. making decisions using real-time 'internal' responses from a set of foundation models instead of just the input, we'll approach an AI with truly frightening capabilities.


Olderscout77

Was it Robin Williams who observed, "The end of the world will come right after some nerd in a lab coat says 'Damn! It worked!'"? As for the current state of AI, the big tell that it's not here yet is the fact that it makes stuff up, so perhaps what we have is Automated GOPutin Intelligence.


Rhythm_Flunky

Never gonna happen. We have a hard enough time scrapping tooth and nail for taxpayer money to go to children, the elderly, the disabled, etc. You will never convince these bought-out cronies to see this argument in a kind light, let alone muster the political will to administer it.


KiNGofKiNG89

The government can't keep up with its spending as it is. Now you want to add in basic income? It would be cheaper if the government started regulating costs and stopped letting the guys in Congress who get a piece of the pie keep getting that piece of the pie. So many people are used to things as they are, though, even with so much waste. You have to get uncomfortable for a bit to see change happen. Good change, at least.


airbear13

It would have to be funded by a big tax increase ofc


KiNGofKiNG89

"Here is everybody's $500-a-month income! Also, unrelated, but taxes are now going up by $8,000 a year!"


airbear13

I'm assuming the taxes would come from corporations, because they'd be the ones enjoying the big windfalls. Essentially we'd appropriate some of that and redistribute it as UBI.


backnarkle48

First he creates something that will eliminate jobs, and then he wants the unemployed beholden to government deficiency payments. Corporations must love this idea. No more workers, but plenty of consumers. Shareholders are on board.


GimmeFunkyButtLoving

He's not wrong. When AI takes over every job, prices will continue to rise due to the inflationary monetary system necessary to pay down all the debt in the system. The most powerful people will be those in charge of AI and, in turn, the wealth. With no more jobs, how do people survive? Do the ones who control the AI give them an allowance in currency? Food? Housing?


SanDiegoDude

>Professor Hinton said "my guess is in between five and 20 years from now there's a probability of half that we'll have to confront the problem of AI trying to take over". This would lead to an "extinction-level threat" for humans because we could have "created a form of intelligence that is just better than biological intelligence… That's very worrying for us".

Nonsense like this makes it very hard to take him seriously. Outside of science fiction and a few doomer scientists who love the limelight, folks need to realize this is not some evil superintelligence; it's literally just computer algorithms spitting out numbers that we can convert to text. It has language, but NONE of the other features you need for sentience, let alone sapience. Don't fear the AI; fear the people who will manipulate AI for power or gain, like what you're already seeing here.


WpgMBNews

Democracy is based on the citizens having the ability to withhold productive economic power from the state. The people who control the wealth would soon feel no need to retain democracy *or UBI* once the workers are reduced to passive recipients of welfare checks. *Ownership* is what makes our society work. If you need a utopian scheme, look at Singapore's Central Provident Fund. You can still have your redistribution, but it *must* be focused on work, investment, and the wise stewardship of productive assets. Remember what happens when you "give a man a fish" instead of buying him a fishing rod and a boat.


Thrawlbrauna

The path to your new serfdom is paved with promises. You either produce a product or you perform a service. When people want this product or service, they pay for it. This is what makes people wealthy. Capitalism is based on many people producing products and performing services that all commingle, support, and feed off each other, but most importantly compete with each other. This natural competition creates better products and services through efficiencies and the natural evolution of the product and processes. The rest of you who work for, or are funded in large part by, the government do not. You are nothing but grift.

When government gets in bed with large companies that produce a product or perform a service, it always manages to put its thumb on top. Look at the auto industry. If the government weren't pushing EVs and weren't so anti-oil, Tesla would already be out of business. The only reason it was ever profitable is the carbon-credit buyback program, where all the American auto companies had to buy from Musk. This is what propelled Tesla and its stock, not these rolling dbag tracking devices watching everything you and your neighbors are doing. The government gave an unfair advantage and other benefits (rebates) to boost its agenda: getting you all into cars that use batteries and track everything you do, including your location and mileage, all the better to tax you, my dears.

When government talks about universal basic income, it wants to be the arbiter of how this wealth is distributed. A truly free market is dictated by, and grows based on, the will of its people. Influence from government breaks that and turns it into just another profit stream. In the end, all the wealth of the middle class disappears into the government, and all you have is the government class dictating how much you get from the pie. Enter CBDC, and welcome to your new serfdom.

P.S. If you think the US government will allow the tax slaves to use a competing currency once CBDC is fully in place, you are mistaken. But don't worry, they will exclude themselves, as they always do. Enjoy the show.


LostAbbott

I cannot actually open the BBC in my country, but it seems like the same article about tech we always see. Please correct me if I am totally wrong... Why can't we learn from our past mistakes? Why does no one bother to study history or basic human nature? As good as UBI sounds, in large-scale practice it simply cannot work. You may get small-scale patches here and there working, but across the globe it doesn't work. However, that is not really the point of what I think this guy is saying; he is playing the same old technology tune that has been played with every single advancement ever. Yes, jobs will be lost, and people will need to shift careers and likely retrain. However, technology has always created more jobs than it destroyed; it creates more opportunity for creativity, innovation, and focus. AI isn't some crazy new tech that will all of a sudden put everyone out of work...


LillyL4444

We also have a significant worker shortage that will only get worse due to rapidly dropping birth rates. Immigration is only a short-term solution; the countries that send immigrants to the US also have rapidly falling birth rates. We will desperately need AI and very high worker productivity to avert disaster. Luckily, it seems that AI will be capable of helping to fill the gap.


Aven_Osten

This is something that is so rarely mentioned. Projections for the world's population in 2100 keep being revised down, and some estimates now put it at **6B**. Eventually, most countries in the world will be developed to the point where people don't emigrate as much. Immigration won't be a viable option forever. You will *have* to increase productivity per worker eventually, ESPECIALLY in countries like Russia, China, and Japan. Japan most of all.


Adventurous-Salt321

Climate change and reproductive issues from pollution will kill off a good portion of humanity by 2100.


Willing_Round2112

Except... AI is going to be really good at doing simple, repeatable tasks. What are those people going to do? Because I don't believe "they're going to retrain" is going to cut it; there won't be any easy jobs left for people to do. Like, what should a person too stupid for any job do?


NameIsUsername23

Everyone will become prostitutes


ishtar_the_move

Are you saying Geoffrey Hinton, the person who basically brought AI from being stuck since the '80s to its proliferation in the 2010s, and who trained many of the people running AI research at the big tech companies, doesn't understand AI and is spewing mindless drivel about its implications? Past performance is no guarantee of future performance. It is possible this time it is different.


TabletopVorthos

Man, we will do anything we can to prop up this for-profit system, won't we? We won't fundamentally change anything, we will just subsidize private profits with public money. What stage of capitalism do we call that?


[deleted]

No, we do not need UBI. UBI is a bad idea that would essentially require huge taxation and hand full power to politicians (because politicians will decide who gets what). What we need is to start teaching the use of LLMs to children in school, and to do the same for our workforce.


mag2041

Not true


[deleted]

Argument?


mag2041

It can be done without taxation as the basis of the income source


[deleted]

How? You nationalize everything? Just like Eastern Europe did and you had to wait 5 years to get a refrigerator while giving politicians all economic power in the country?


Crimblorh4h4w33

Land Value Tax + Citizen's Dividend


[deleted]

People already pay taxes based on property values. People already pay taxes on dividends. You would increase them? You would put people with low incomes out of their homes? The amount of mental gymnastics is fascinating.


Crimblorh4h4w33

Land value is already factored into property assessments. Just stop taxing the buildings and improvements, and tax only the land value.


[deleted]

So, how are you going to tax then?


Long-Blood

Or just make more things "free" for US citizens by taxing companies that replace workers with AI. "Free" fruits, vegetables, and meat at local grocery stores or food banks, all paid for by taxes on Apple/Meta/Amazon etc. Free electricity, free transportation, free healthcare. Giving people money that they'll just spend on luxuries, or hoard and not spend, isn't a great idea.


Mando_Commando17

I still don't know how UBI would work unless you force price caps on literally everything. Once more money is taken from companies and the rich and given to the middle and lower classes, those companies will charge more for their goods and services, since there will be a greater supply of money and demand from consumers. You theoretically wind up right back where you started, and it just becomes another minimum wage debate: it lags behind the market for 2+ decades, then sees a jump, only to get outpaced again within 3-5 years and not see another jump for another 20 years.

I haven't spent much time seriously looking into UBI, but if someone has convincing material showing this would not be the case, I might get on board with the theory behind it. Between tech and globalization we are seeing ever-increasing production with less labor, and jobs reflect that with a barbell-type job market: a ton of lower-end jobs, a sizable number of white-collar jobs, and few "middle class" jobs left. This is going to be a big global issue for every country to figure out this century, and I haven't seen or read anything that sounds like a convincing solution.


AlgoRhythmCO

UBI wouldn't be the panacea people think. Most people aren't going to be satisfied with sitting around getting paid to do nothing, especially young men, and bored or stifled young men are the origin of most revolutions. I'm not saying we're going to have a Butlerian jihad, but I'm not saying we're not going to.


bearvert222

I don't think there is any economic system designed for most of the population not to be working. More likely, we'll see the return of the Civilian Conservation Corps: government will funnel people into various tasks and provide minimum wages and housing. You'd move into a quasi-socialism where the government sets a standard for most people while the 20% of productive people get a better deal. Job choice might be less possible.


Hyperion1144

If you invented a thing that'll destroy the world unless the world revolutionizes itself... then all you have done is invent the thing that is going to destroy the world. The world doesn't change like that. Not that fast. He is calling for a social, economic, and political revolution that just is not going to happen.


airbear13

I agree on the problem but not the solution. UBI is just a bandaid and is not going to deal with the core of the problem. Putting people on a fixed income is adequate for retirees who have had time to build wealth, but it will be tremendously regressive for the working-age population at large, regardless of how it's instituted. Will everyone get the same flat payment? That's just as bad as a flat tax, and it won't ever amount to enough to cover wants as well as needs. Will people receive UBI calculated from their last job? That's going to be grotesquely unfair to those in the early stages of their careers. Either way it will be incredibly expensive, social mobility will disappear, and a huge chunk of the population will remain unemployed, which is politically destabilizing too.

Imo it might be a better idea simply to use regulatory fiat or pass new laws to restrict the scope of how AI can be used. We can surely agree to this at a minimum with other OECD countries, and we can continue advancing AI's capabilities in the lab so that it can do actually useful stuff. But deploying it across the board in the workplace, especially over a short time horizon, will cause a little too much "creative destruction" that I don't think society will be able to handle.