
GovOS
Waragainstsleep
Posting Junkie
Join Date: Mar 2004
Location: UK
Dec 12, 2024, 09:30 PM
 
I'm going to articulate this badly. As usual.

I can't remember what I was reading or listening to at the time, but I recently decided that government (in the UK anyway) often sets policy a bit like this:

The cabinet and a few others from the ruling party's inner circle will sit down and discuss what policies they'd like to implement. Some of these they will then rule out or water down because of how the people, or more likely the press, will react to them.
The policies that survive that PR filter get handed to civil servants, who are asked to crunch the numbers and find out "What happens if we do X? What happens if we do Y?"
The civil servants then report back their best projections and predictions, and the government decides whether it's worth doing based on that information.

What I now doubt is the complexity and comprehensiveness of whatever modelling gets done for these policy ideas. I suspect there are some pretty fancy economic models for the big ones, like how putting a tax up or down will affect inflation or the price of petrol, but I now believe there are all manner of more complex consequences that go completely unconsidered by governments making important decisions.

Trump's tariffs are likely to be a good example of this. Our Labour government has done some half-assed prep work on its policies surrounding the winter fuel allowance and inheritance tax on farmers. I'm certain there are myriad more examples all over the world.

So this got me thinking about whether, and how, you could build a proper predictive model for government. With the advent of AI, it should just be a matter of time, effort and data. What I'm proposing is essentially a new version of democracy for the modern world, an Operating System for Government, which I'm calling GovOS.

The foundation of this OS is to record a lot of statistics. But not just a lot, they need to be the right statistics. Then the AI can find patterns and begin to understand how policies affect people, economies, countries.
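To make that concrete, here's a bare-bones sketch of what the "find patterns in the right statistics" step might look like. Everything in it is hypothetical: the column names, the figures, and the choice of a simple linear model are placeholders for illustration, not a real design.

[CODE]
# Toy illustration of the GovOS idea: record policy settings and outcomes,
# fit a model to the history, then ask it what a proposed change would do.
# All names and numbers below are made up for the sake of the example.
import pandas as pd
from sklearn.linear_model import LinearRegression

# Hypothetical historical record: one row per year of policy inputs and an outcome.
history = pd.DataFrame({
    "fuel_duty_pct":          [5.0, 5.5, 6.0, 6.5, 7.0, 7.5],
    "winter_fuel_allowance":  [300, 300, 250, 250, 200, 200],
    "pensioner_hardship_idx": [1.8, 1.9, 2.3, 2.4, 2.9, 3.1],  # the outcome we care about
})

features = ["fuel_duty_pct", "winter_fuel_allowance"]
model = LinearRegression().fit(history[features], history["pensioner_hardship_idx"])

# "What happens if we do X?" -- score a proposed policy before enacting it.
proposed = pd.DataFrame({"fuel_duty_pct": [8.0], "winter_fuel_allowance": [150]})
print(model.predict(proposed))  # projected hardship index under the proposal
[/CODE]

Obviously the real thing would need far richer data and something much smarter than a straight-line fit, but that's the shape of the loop: gather the right statistics, fit, then interrogate the model before the policy goes live.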
I understand there are serious security concerns, but I believe more and more that having an app available to every voter would be a real benefit to a modern democratic model. It solves a number of issues:
It gets people more in touch with politics.
At least when a half-decent government is in charge, it can act to counter propaganda and misinformation.
In the UK, politicians avoid giving their personal views on things like the plague. An app would allow elected officials at all levels to poll constituents on what they think or want.
Perhaps most importantly, if this system were to prove sufficiently secure, it would make referenda cheap and fast, so the sorts of fundamental constitutional changes that seem to be needed (and/or need to be stopped) could be voted on by the public. In the US, this might codify abortion rights or abolish (or at least rein in) 2A.
I'd love to use this app to make it easier to remove bad people from office.

I have lots more thoughts on how I would re-write democracy for the 21st century but it's late and I'm tired. Ask me about the spirit of the law, garbage collection and my informed electorate clause if you're interested.

What else does GovOS need? I'm not interested in keeping anything for the sake of tradition or because that's how it's always been done. I'm interested in making the needed changes, no matter how revolutionary they seem.
I have plenty of more important things to do, if only I could bring myself to do them....
     
Laminar
Clinically Insane
Join Date: Apr 2007
Location: Iowa, how long can this be? Does it really ruin the left column spacing?
Dec 13, 2024, 09:37 AM
 
Originally Posted by Waragainstsleep
The foundation of this OS is to record a lot of statistics. But not just a lot, they need to be the right statistics. Then the AI can find patterns and begin to understand how policies affect people, economies, countries.
We already have that. Scientists and economists are already doing this. We have the answers. We are not short on data, plans, or even the money to optimize government for the common good. We know exactly how we could make huge strides against infant death, maternal mortality, child hunger, and many other problems that are almost entirely uncontroversial.

More than 1,000 economists signed a letter advising Hoover not to enact tariffs in 1930. He still did it, and everything they predicted came true - prices skyrocketed, the export market was demolished, and the entire world was driven headfirst into the Great Depression.

We have the answers. We know what works and what doesn't. What we are missing is the will to do it. What is not agreed upon among the general population is that the government should work for the common good. A really popular narrative in the US (and likely other places) is that government by its nature is inefficient, ineffective, and ruinous to innovation, progress, and any one individual's success. The ONLY acceptable change, in some people's minds, is to take funding, power, and influence away from the government and let people live their lives as big, strong, rugged individuals. In this reality, any proposal for a system of governance will first have to overcome the decades of brainwashing that have led people to believe they're actually worse off with all of the clean water, breathable air, relative safety, and financial opportunity that the government has afforded them.
     
subego
Clinically Insane
Join Date: Jun 2001
Location: Chicago, Bang! Bang!
Dec 13, 2024, 10:21 AM
 
Have you ever read Albedo? They basically have this.

One of the problems I remember them having is people (well… furries) gaming the AI by falsifying data. Garbage in, garbage out.
     
Laminar
Clinically Insane
Join Date: Apr 2007
Location: Iowa, how long can this be? Does it really ruin the left column spacing?
Dec 13, 2024, 11:48 AM
 
Actually, I'd wager this is already happening, but in the wrong direction. Conservative-funded think tanks are generating legislation and passing it to the politicians they're bribing, who are just making it the law. I wouldn't be surprised if they were using AI to help model outcomes to drive society toward their goals, which are decidedly not "the greater good."

https://www.thegazette.com/article/t...wa-statehouse/

https://iowastartingline.com/2021/05...ew-voting-law/

In the video, the executive director, Jessica Anderson, claimed she and her organization helped draft the bill and organized activists to voice their support at public hearings before its ultimate passage.

“Iowa was the first state that we got to work in and we did it quickly and we did it quietly,” she said in the video. “Honestly, nobody noticed. My team looked at each other and we’re like, ‘it can’t be this easy.'”
     
Waragainstsleep  (op)
Posting Junkie
Join Date: Mar 2004
Location: UK
Dec 14, 2024, 07:08 PM
 
I know this data is all being gathered, but it either doesn't make it to government, or it gets ignored. I'm proposing it gets built-in.
I have plenty of more important things to do, if only I could bring myself to do them....
     
Laminar
Clinically Insane
Join Date: Apr 2007
Location: Iowa, how long can this be? Does it really ruin the left column spacing?
Dec 16, 2024, 10:29 AM
 
That's not an AI problem, that's a human will problem. Why would the people who disproportionately profit off of the current system allow something to upset that balance?
     
Waragainstsleep  (op)
Posting Junkie
Join Date: Mar 2004
Location: UK
Dec 17, 2024, 09:57 PM
 
I've noticed that when an idea sufficiently different from the status quo is suggested, the vast majority of people, even decent, intelligent ones, will find ways to resist it. The objections tend to involve practicalities, and more often than not they cite 'human problems': corruption, or other bad actors looking to preserve their status or wealth, or to gain some new advantage at the expense of others.
We should not be so quick to accept these obstacles as inevitable and just give up. Human nature can be curtailed, modified, influenced and so on. We just need to add extra features to good ideas to nullify these bad actors, just like closing any other loophole.
I have plenty of more important things to do, if only I could bring myself to do them....
     
Laminar
Clinically Insane
Join Date: Apr 2007
Location: Iowa, how long can this be? Does it really ruin the left column spacing?
Dec 18, 2024, 10:26 AM
 
I'm thinking about two different things. The first is whether or not it's a good idea conceptually. My impression of AI thus far is that it's a way for tech companies to further distance themselves from accountability for the output of their software. For decades, Windows has had immense bugs and problems. That code was written by people, and with so many of these bugs and problems, they can't figure them out. But if they dedicate significant time and resources, they can logically trace back through the code and figure out why it's doing what it's doing.

I see this with software vendors at work. I had one machine that would randomly go unresponsive once every few days. They stationed a software engineer at my site who sat there for 8-10 hours per day analyzing the entire system top to bottom. After three weeks of this, he and his team identified the bug deep in their software stack, figured out how to mitigate the issue, then issued a new version of their software that didn't lock up like before. That was a huge cost for them on a (relatively cheap) $15,000 machine.

I have another system that datalogs to a virtual machine managed by IT. It will randomly stop logging, even though all of the monitors claim comms are still good. I've contacted the vendor several times and all I get back is "lol i dunno, have you tried updating Windows?" I know they're strapped for resources because I'm frequently dealing with the president of the company, which is a bad sign - the president of the company shouldn't be coordinating my shipments. But they definitely don't have the resources to dedicate a whole team to this issue for several weeks to solve it, even though it's likely entirely solvable.

AI is the fuzzy space in between the input and the output with no logical traceback, so now when an AI model does something wild or crazy or out there or entirely and completely wrong, they can just say, "lol i dunno." The output is the output, there is no checking or verification that the output is good or correct or logically based on the input. In my mind, this is a bug, not a feature.

Since the output is a fuzzy outcome of the input, and since the output determines the distribution of money, power, and lives, how do you ensure the input is neutral and correct? How do you prevent someone from steering the outcomes? How do you trace back decisions and double-check that the output isn't being corrupted by someone trying to benefit disproportionately? These are all problems of our current system, and putting an untraceable, untrackable fuzzy language model in between doesn't solve any of them.
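At an absolute minimum you'd need a tamper-evident record of every input dataset and every projection the model spits out. Here's a rough, purely hypothetical sketch of what I mean, a hash-chained log where altering any past entry breaks everything after it; the file names and figures are made up.

[CODE]
# Rough sketch (hypothetical names and values) of a hash-chained audit log:
# every input dataset and every model output gets an entry whose hash covers the
# previous entry, so editing any past record breaks the chain from that point on.
import hashlib
import json

def chain_entry(prev_hash: str, record: dict) -> dict:
    """Build an append-only log entry; its hash covers prev_hash and the record."""
    payload = json.dumps({"prev": prev_hash, "record": record}, sort_keys=True)
    return {"prev": prev_hash, "record": record,
            "hash": hashlib.sha256(payload.encode()).hexdigest()}

log = []
prev = "genesis"
for record in [
    {"step": "input",  "dataset": "tax_receipts_2024.csv"},                 # hypothetical
    {"step": "output", "policy": "fuel_duty_up_1pct", "hardship_idx": 3.4},  # hypothetical
]:
    entry = chain_entry(prev, record)
    log.append(entry)
    prev = entry["hash"]

# An auditor recomputes the chain; any altered entry changes every hash after it.
[/CODE]

Even then, that only tells you the record wasn't edited afterwards. It does nothing about someone feeding in skewed numbers in the first place, which is the steering problem I'm actually worried about.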

The second part, beyond whether or not it is a good idea, is a realistic path to implementation. How does a country hand over the reins of power to a computer model? It wouldn't be just a step forward to implement, it would be an entire restructuring of the government hierarchy. The rich and powerful already have a significant portion of the country living in an alternate reality, one that they guide with whatever narratives they deem profitable. Any change in government that leads to more equitable outcomes for the whole population would absolutely be opposed by the current hierarchy, so all they have to do is say that the AI is pro-abortion or has they/them pronouns and they'll have an army of Oakley-wearing dudes in Dodge Rams terrorist-attacking whatever facility houses the AI.
     
Thorzdad
Moderator
Join Date: Aug 2001
Location: Nobletucky
Dec 18, 2024, 10:32 AM
 
Originally Posted by Laminar
That's not an AI problem, that's a human will problem. Why would the people who disproportionately profit off of the current system allow something to upset that balance?
See also: The dead UH CEO, and the company’s implementation of AI to better facilitate claim denials.
     
   