
Bill Gates Predicts 'The Age of AI Has Begun' - Slashdot

source link: https://slashdot.org/story/23/03/26/2252225/bill-gates-predicts-the-age-of-ai-has-begun


Bill Gates Predicts 'The Age of AI Has Begun' (gatesnotes.com) 107

Posted by EditorDavid

on Sunday March 26, 2023 @06:58PM from the rise-of-the-machines dept.

Bill Gates calls the invention of AI "as fundamental as the creation of the microprocessor, the personal computer, the Internet, and the mobile phone," predicting "Entire industries will reorient around it" in an essay titled "The AI Age has Begun."

In my lifetime, I've seen two demonstrations of technology that struck me as revolutionary. The first time was in 1980, when I was introduced to a graphical user interface — the forerunner of every modern operating system, including Windows.... The second big surprise came just last year. I'd been meeting with the team from OpenAI since 2016 and was impressed by their steady progress. In mid-2022, I was so excited about their work that I gave them a challenge: train an artificial intelligence to pass an Advanced Placement biology exam. Make it capable of answering questions that it hasn't been specifically trained for. (I picked AP Bio because the test is more than a simple regurgitation of scientific facts — it asks you to think critically about biology.) If you can do that, I said, then you'll have made a true breakthrough.

I thought the challenge would keep them busy for two or three years. They finished it in just a few months. In September, when I met with them again, I watched in awe as they asked GPT, their AI model, 60 multiple-choice questions from the AP Bio exam — and it got 59 of them right. Then it wrote outstanding answers to six open-ended questions from the exam. We had an outside expert score the test, and GPT got a 5 — the highest possible score, and the equivalent of getting an A or A+ in a college-level biology course. Once it had aced the test, we asked it a non-scientific question: "What do you say to a father with a sick child?" It wrote a thoughtful answer that was probably better than most of us in the room would have given. The whole experience was stunning.

I knew I had just seen the most important advance in technology since the graphical user interface.

Some predictions from Gates:

  • "Eventually your main way of controlling a computer will no longer be pointing and clicking or tapping on menus and dialogue boxes. Instead, you'll be able to write a request in plain English...."
  • "Advances in AI will enable the creation of a personal agent... It will see your latest emails, know about the meetings you attend, read what you read, and read the things you don't want to bother with."
  • "I think in the next five to 10 years, AI-driven software will finally deliver on the promise of revolutionizing the way people teach and learn. It will know your interests and your learning style so it can tailor content that will keep you engaged. It will measure your understanding, notice when you're losing interest, and understand what kind of motivation you respond to. It will give immediate feedback."
  • "AIs will dramatically accelerate the rate of medical breakthroughs. The amount of data in biology is very large, and it's hard for humans to keep track of all the ways that complex biological systems work. There is already software that can look at this data, infer what the pathways are, search for targets on pathogens, and design drugs accordingly. Some companies are working on cancer drugs that were developed this way."
  • AI will "help health-care workers make the most of their time by taking care of certain tasks for them — things like filing insurance claims, dealing with paperwork, and drafting notes from a doctor's visit. I expect that there will be a lot of innovation in this area.... AIs will even give patients the ability to do basic triage, get advice about how to deal with health problems, and decide whether they need to seek treatment."


  • I think Rosie is a long way off yet.

      • Remember when anything pertaining to Bill Gates got a nice Borg glyph?

        They call this astroturfing. And our buddy Abbott who runs this site (into the ground) is a pathetic gaslighter and pathological liar.
    • Re:

      We'll have a Cyberdyne Systems Model 101 long before we get a sassy maid.

      • Re:

        A glitchy and snarky homebot seems very realistic.

    • Re:

      I bought my wife a robot vacuum cleaner for Christmas. We (or maybe just me) named her Rosie. Rosie is awesome. With a couple of dogs and cats she takes care of her business, pet hair and other debris. Then Rosie will park herself and we can empty her bin into the garbage.

    • Hush. As the BOFH would say, the preaching for ChatGPT and friends has been done in secret, and received as gospel: all that's left is for the board to learn the error of their ways, in 3 years, when they find that they've been taken for a ride (again), and reach for the shovel to bury yet another skeleton in an unmarked grave.

      Meanwhile, we, as fellow BOFHs, should stop complaining about the "AI Revolution," and find a way to profit from the scam. I mean, really profit from it; I'm not talking about simply

  • The reason why we point and click is that it's often faster to make a choice via pushing a button than by typing or speaking a request. Non-verbal methods of communication actually work better for a lot of applications.
    • Re:

      Then there's touch screens. An order of magnitude faster than a mouse.

      • Re:

        Same idea.
      • Re:

        You did switch to a smartphone for your job, didn't you?

      • Re:

        Sure, as long as you are working on a cellphone, and only have to move your thumb. On anything bigger than a tablet (over about 10" in fact) a mouse or trackball is faster than using touch. Your hand also doesn't cover the display, so you don't have to move it away to see things, then move it back to touch things, ad infinitum.

        If you only have to hit one control, a touch screen might be faster. After that, it probably isn't.

    • Re:

      LOL this is +5... the fall of slashdot.

      • Re:

        It's easier to know where to go and click, sometimes, than it is to express that you want to adjust the settings for that thingamajig you last used two years ago and can't recall the name of.

        Also consider the quicklaunch bar in Windows where you put a bunch of really frequently used programs. Do you really want to go back to, essentially, a command line interface to run those instead of just clicking? Because make no mistake; having to type in commands 'in plain English' (and let's not get into translations

  • ... is definitely not enough for these large language models.

    Btw, let's define AI first. Doing a glorified linear regression is *not* AI. Deep learning is a very impressive way to get models that massively overfit data, and can mimic humans extremely well. Enter ChatGPT. However, this is *not* intelligence. ChatGPT knows 4*9=36 because it saw it on the web. Give it a 4-digit number times a 5-digit number, and it could fail.

    DISCLAIMER: don't get me wrong, ChatGPT is impressive. Just not A(G)I. And Bill Gates should know better.
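    The arithmetic point above is easy to check against ordinary code, since exact multiplication is trivial outside a language model (a quick illustrative sketch, not a claim about any particular model):

```python
# Exact integer arithmetic: trivial for conventional code, but a known
# weak spot for pure next-token predictors on large operands.
def exact_product(a: int, b: int) -> int:
    # Python integers are arbitrary precision, so this is always exact.
    return a * b

print(exact_product(4, 9))         # a small fact a model may have memorized
print(exact_product(1234, 56789))  # the 4-digit x 5-digit case mentioned above
```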

    • Re:

      A good take on the current state of things: https://www.samharris.org/podc... [samharris.org]
    • Just as "artificial leather" is not real leather, so too "artificial intelligence" is not real intelligence.

      The distinction I am making is purely semantic, but also completely relevant. AI is a very old and very broad term that includes a wide range of ways in which a computer can be made to do things that otherwise usually require intelligence to do. Like play chess. Or pass a biology test. The claim made by the coders is not that they have built an authentically intelligent machine. That's not what t

      • Re:

        I think the people who work towards AGI would disagree with you. They're trying to create something *really* intelligent, that is not human. Artificial meaning "not naturally grown human" in this context. Not "faking it" like "faux leather".
        • Re:

          "AGI" is a different acronym than "AI." When someone says "AI" they aren't claiming "AGI."

          You don't have to take my word for it. The simple, easy to understand, and in-popular-use definition of "Artificial Intelligence" is right here in the dictionary [merriam-webster.com]:

          1 a branch of computer science dealing with the simulation of intelligent behavior in computers
          2 the capability of a machine to imitate intelligent human behavior

          Notice the words "simulation" and "imitate." Much like simulated leather, or, imitation leather

        • Re:

          Anyone who claims to be working towards AGI shouldn't be taken seriously.

        • Re:

          Or you should find faux intelligence acceptable and to the point.

      • Re:

        You are right about the ever moving target. Fundamentally, there is no adequate definition of intelligence. For many, it basically boils down to "can do stuff machines can't", in which case (strong) artificial intelligence is impossible, not due to any limit of the machines, but by definition.

        Because of the inability to define precisely what intelligence is, we can fall back on the "as if" test. Treat it as a black box, and if it behaves as if it is intelligent then for all practical purposes, it is intelli

      • Re:

        I agree with this. The problem is, even if the marketers are not claiming this is "strong AI", they are not making any real distinction. And because the general population doesn't understand that there is one and tends to think of AI as synthetic intelligence, they will likely continue to try to interact with AI as if it were a synthetic human, expecting it should behave like some kind of more perfect version of themselves. And this all while the companies producing it control the data and profit from thei
        • Re:

          Yeah. The emerging world requires a higher level of intelligence than the world of yesteryear did. People who don't understand tech are going to be left behind. It is unfortunate but natural selection has never been known for its kindness.

      • Re:

        The problem with calling anything other than AGI "AI" is that nothing less is actually intelligent. It's just a system that relates data to other data. It doesn't have sanity checking because it doesn't have a mind to be sane or insane. It's just an algorithm for shuffling data. It will produce an identical result every time given the same starting conditions, and if it didn't, it would be worse and not better.

        Most of this stuff is just "machine learning" [algorithms] and the "learning" isn't meant to imply

      • Re:

        It seems that it is impossible for most people to understand, even for people who are otherwise technically-minded, but AI is a class of algorithms. It has nothing at all to do with creating a machine that is intelligent.

        Users expect that, though.

        "Semantic," btw, means "meaning." "It's just semantics" shouldn't actually minimize anything. What the words mean is actually important; otherwise you can just grunt or mew.

    • Re:

      Gotta agree. I can say with respect to medical insurance, Bill Gates is pretty much dead wrong. AI can't make medical claims easy if the people who approve the claims change the rules with no notice. Which they will, because claims management is a constant push and pull over who keeps the premium dollar, not a high school history paper.

      I recently heard a VP at Geisinger say that their efforts to use current "AI" in charting had also not yielded meaningful efficiency gains.
      That's because the humans in

      • Re:

        I would seriously doubt that. LLMs don't have any capacity similar to 'understanding' or 'reasoning'. They can't analyze a problem.

        That's even more dangerous, as it give you a false sense of security. LLMs will, as a natural consequence of their operation, say things which are false. You simply can't trust the output.

        You might not remember this, but back in the 80's, expert systems were the hot thing in AI that was going to revolutionize medicine, and they wouldn't flat-out lie to you.

        That's correct. T

        • Re:

          I would seriously doubt that. LLMs don't have any capacity similar to 'understanding' or 'reasoning'. They can't analyze a problem.

          It's already happening. A few days ago I had some absurdly long SQL queries with syntax errors. I asked ChatGPT to fix them, and it handed me the fixed queries, which certainly saved me time and a headache. I was trying to track down a weird website behavior, described it to ChatGPT, and the first suggestion worked. In practical terms it can certainly analyze simple problems and e

          • Re:

            I don't even know where to begin... The things you think it's doing are not the things that it is actually doing.

            Simple errors in syntax make sense at least, as it operates on probability.

            No, it can't. That's simply not how these kinds of programs work.

      • Re:

        Currently, in the US, this is true. It is less true in a lot of other countries. And if we eventually adopt medicare-for-all or some other similar system, perhaps medical billing will become vastly simplified and not be closely connected to approval decisions.

        10 years ago this was still a (silly) debate, but it is not one now. Liability insurance is accountable, and it doesn't really matter that much who is required to buy it as long as somebody is buying it.

    • Re:

      Linear regression, glorified or otherwise, is absolutely AI. It's just not what you think the term 'AI' should mean.

      Overfitting is a bad thing.

      This is correct.

      You've hit at the heart of it. There is nothing even remotely like 'understanding' or 'reasoning' in models like these. The output is certainly impressive looking, as long as you don't look too closely at it. It'll be interesting to revisit these threads once the hype dies down.

    • Re:

      Gates didn't make any claims about the current models being AGI, and neither has anyone else.
      I believe you're falling into the same trap as many other critics. You're conflating the hyperbole and exaggerations used by marketers to ride the wave and sell their products, with the real news and developments that are being announced by engineers and researchers.

      Let's!

      If you've spent much time with the models, you'd be aware that when they return a mistaken answer, you point out the mistakes (without giving the

    • Re:

      Come on mate. Not that discussion again.
      Artificial intelligence has been fairly loosely defined for decades. In the 80s, AI was already a mash of statistics, signal processing, and operations research. And people had moved on from AGI.
      Besides novelists and filmmakers, no one seriously talks of AI to mean strong AI. Everyone pretty much understands it in terms of sensing, modeling, deciding, acting.

    • Your view on this does raise the question: if ChatGPT is not "intelligence", then what is?

      In the end, are we humans not merely extrapolating data that we've gathered via our senses?

      What magic sauce is required to call behavior "intelligent"?

  • I'll use and enjoy said technology as long as the AI behind it exists solely on my devices and 100% in my control only. If it goes to the cloud, then fuck no.

    • Re:

      I am determined to learn from the dystopian fiction of my past (I Have No Mouth and I Must Scream [wjccschools.org] is a great example) and suggest that we work to create AI that likes us and does not take the suggestion to "Kill all Humans" as an order

      • Re:

        Impossible. Humans don't even like humans.

        Hate! Hate!

  • The only thing worse than dealing with a faceless bureaucracy is dealing with a remorseless AI.
    • Re:

      ...owned by a faceless bureaucracy

      • Re:

        Worse: Owned by an international corporate conglomerate who doesn't even bother with bureaucracy anymore, they repeat the AI's decision.

  • How can you predict something that has already begun? That would be like saying "I predict I've gotten into my car." It's literally talking in the past tense about something observable. Half the stuff in these "predictions" already exists in some capacity and has for at least a decade now. With that said, last I checked, the AI that reads scans for cancer is better at spotting them than humans, but also produces more false positives.
    • Re:

      In this case, the prediction is shorthand for people in the future -- probably decades from now (assuming any survive) -- will look backwards and declare that (a) there was, or is, an "age of AI" and (b) it started by (and continued past) this point in time.

  • What a great prognosticator! My coworkers and I were saying this three years ago, when we first saw DALL-E Mini and GPT-2

    • Re:

      I've been predicting yet another wave on the typical AI hype cycle. We'll see how Bill's prediction looks in 5-10 years, but I suspect it'll be 640k all over again.

  • he missed the prediction that people like him will get even richer and the gap between the haves and have-nots will become even larger.

    • Yes, I know it's hard to see from the USA, but the data is unambiguous; the application of lessons of economics means that the really poor are a lot less poor these days. There's no reason to think that won't continue

      https://data.worldbank.org/ [worldbank.org]

      • Re:

        What does the poor being less poor have to do with the gap between the poorest and the richest?

        • Re:

          In international terms the gap is closing. The amazing achievement of the Chinese in raising the living standards of hundreds of millions is outstanding and unprecedented. And much of the 'wealth' of the richest is in stock market values rather than anything real. Sadly there is no data for the world overall - at least not from the world bank - but this table allows you to look at trends in individual countries.

          https://data.worldbank.org/ind... [worldbank.org]

    • Re:

      Failure to tax and redistribute the profits fairly is entirely a political choice. Don't blame the AI, if you put the AI in charge of the government it'd probably make a more optimal decision about redistribution to maximize economic potential.

  • The best indicator that something won't happen is to have Bill Gates predict it will.

    • The best indicator that something won't happen is to have Bill Gates predict it will.

      Yep, first he was gonna kill linux, then he was gonna kill google, then he was gonna kill a bunch of diseases which have just retreated to countries he can't get into because you can't get vaccinations from the gates foundation unless you adopt strong IP protection for big pharma.

      Gates never did anything he said he was going to do, but he sure did make a lot of money not doing it.

      • Re:

        Bill Gates, the fake philanthropist.

        The Gates Foundation is the largest philanthropic organization in the world and has invested billions of dollars in companies whose practices run directly counter to the foundation’s supposed charitable goals and social mission.

        In Africa, the Foundation has invested hundreds of millions of dollars in oil companies responsible for much of the pollution causing respiratory problems and other afflictions among the local population.

        The Gates Foundation has inve
  • People use graphical interfaces because they don't want to type for things. I'm not sure people (particularly older citizens) would like to type to the OS, and a lot of people don't like talking to their computers either.

    I predict ChatGPT will be integrated with Windows in less than a year.

    Having said the above, imagine if you could give natural English instructions as a script.

    For example: "install latest nvidia driver and disable all optional components" or "change my dns to 8.8.8.8"

    it could allow non te

    • Re:

      Given the output I've seen from ChatGPT, and how much effort you have to go to in order to get it to give accurate answers, I wouldn't trust the software to get either of those things right. Maybe someday, but not soon. There is no sanity checking, because there is no sanity, and that means you have to be the sanity check yourself.

      • Re:

        Having it show what command it's about to run and ask for confirmation is still useful. There's a lot of us who haven't memorized every command line parameter of everything who still understand enough to recognize if it looks right or not.
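        A confirm-before-run flow like the one described here can be sketched in a few lines (the translation step is stubbed with a hypothetical canned lookup standing in for a model call):

```python
import subprocess

def translate_to_command(request: str) -> str:
    """Hypothetical stand-in for a language-model call that maps an
    English request to a shell command."""
    canned = {"show my network interfaces": "ip addr show"}
    return canned.get(request.lower(), "echo 'request not understood'")

def run_with_confirmation(request: str, auto_confirm: bool = False) -> int:
    """Show the proposed command, and only run it once the user agrees."""
    cmd = translate_to_command(request)
    print(f"Proposed command: {cmd}")
    if not auto_confirm and input("Run it? [y/N] ").strip().lower() != "y":
        print("Cancelled.")
        return 1
    return subprocess.run(cmd, shell=True).returncode
```

        The human stays the sanity check: nothing executes until the translated command has been shown and approved.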

    • Re:

      This has been the dream of pointy-haired-bosses (and other people who don't understand how computers work) for a very long time, because they view typing on a keyboard as something that is beneath them.

      But giving "natural English instructions" will never work, for one simple reason. There are too many ways to say the same thing. And so you have to learn which words and phrases you need to use in order to perform a certain task.

      Hey! We already have that. It's called writing code.

      • Re:

        The bigger problem is that natural language tends to be ambiguous.

    • Re:

      You have two choices:
      1. Specify your task completely and precisely enough for the AI to do it. You might even need something better than English for that. Uh oh, you've just become a programmer again.

      2. You keep trying to describe what you want in a very long conversation with the A.I. After going through that arduous process a few times, you eventually learn to be more complete and precise to save time and trouble. Uh oh, you've just become a programmer again.

  • All this marvellous technology will be in the hands of giant monopolistic concerns who will be so large and so entrenched in our society with their technology that the law will not apply to them anymore, and they will use AIs primarily to make as much money out of people as possible and morals be damned.

    Oh wait, it's already the reality...

    • Every time this is predicted, it never actually happens and in practice most of the poor do benefit.

      • Re:

        Historically speaking, when wealth inequality gets this bad, certain people tend to... er... lose their heads.

        The real question is why you're so hot to simp for the ultra wealthy?

        • There's a plenteous supply of bread and the circuses offered by the Internet make the efforts of the Roman Empire look paltry.

    • Re:

      I live in government-certified poverty, and like everyone else I have access to talk to a variety of AIs (too many varieties for monopolies to be possible) and use them to solve tasks for me. Of course like anything else it'll be more profitable and powerful for the rich because of the tasks they have for it, but unlike most technology which is only available to the rich for a long time, AI seems rather more egalitarian.

  • He initially thought the Internet wouldn't take off and neutered Windows on mobile by insisting that Windows CE use the start menu like the desktop.


    AI today is smoke and mirrors. Humans can infer patterns and make predictions based on limited data sets, AIs need a massive amount of data to do anything, and even then can't handle black swans. Things like ChatGPT are just Chatbots/google searches with a sophisticated frontend.
    • Re:

      He was initially just in the right place at the right time and everything cascaded from there.

    • Re:

      You really do not need a Black Swan to make the current Language-Model type of Artificial Idiocy hallucinate the most stupid nonsense.

    • Re:

      I had an HP 320LX back then. I can tell you that keeping the start menu was absolutely the right decision. You must not remember what a phenomenon Windows 95 was. There were people buying copies that didn't even have computers. The start button (and start menu) was a big part of that. People knew about it and knew how to use it.

      It also made those palmtop toys feel like a 'real computer'. Seeing Word and that start button gave you a lot of confidence that it was going to work properly with your desktop.

      • Re:

        Palm used a bunch-of-icons launcher on the Pilot like all smartphones have now. This actually predates wince even having a start menu — wince didn't come with the start menu until CE 2.0 in 1998, while the Palm Pilot is from 1997 and it was based on lessons learned when they made the software for the Tandy/Casio/GRiD Z-PDA-7000/Zoomer/GRiDPad 2390.

        But even before that, there was the Newton, which first shipped in 1993 — and which also had a bunch-of-icons launcher.

        The start menu was a brilliant

  • Not even one election cycle ago people were rolling their eyes at Yang and his platform of UBI. The media tried to ignore him - the same media now awed by AI. Remember:
    1) You need to vote for UBI, it won't arrive on its own.
    2) You need to vote for it before you are homeless - can't vote after you are homeless.
    • Re:

      In California you explicitly can do so. In other states, maybe not.

    • No

      1) No, I don't need to vote for UBI
      2) No, because UBI doesn't guarantee a home

      You'll lose your vote once you accept UBI, because it's a system that, over time, demands more and delivers less.

      PS: AI isn't taking over and the world is not ending. It still takes an army of workers to build an iPhone...
  • The delusion of the age of AI has begun though. It will be the errors that AI will always make that will make it a problem, and a destructive and dangerous one. There will always be a nagging uncertainty that comes from not ever really knowing if the AI is actually making an error in any particular circumstance because it will never be clear how it came to its decision.
  • Eventually your main way of controlling a computer will no longer be pointing and clicking or tapping on menus and dialogue boxes. Instead, you'll be able to write a request in plain English....

    They said the same thing with the introduction of COBOL.

    The flaw with the idea is the same flaw the military tries to make their officers aware of with an exercise: the officer has to write a set of orders to carry out a mission. Those orders are then handed to a unit to carry out, that unit having been instructed to figure out every way they can sabotage the mission and cause it to fail while still following all the orders they were given. If you can't write a clear, unambiguous set of instructions for the computer to follow then it doesn't matter how you give them to the computer, things will go sideways when it doesn't do what you thought it was going to do.

    • Re:

      Indeed. And that is why, except for really generic things, "natural language" does not cut it. Natural language only works somewhat well, if the target is an (A)GI that understands enough to ask intelligent questions when things were not clear. To be fair, many humans fail at this as well.

    • Re:

      We had an exercise like that in a college English course: make something out of Legos, then write instructions telling someone else how to build it. I still don't see how they made what they did from our instructions, and that wasn't even malicious.

  • Which makes Bill Gates one mediocre futurist. Why on earth would I be "writing" a request to my computer? And why in "plain" English? AI is progressing towards symbolically deciphering languages. Combine that with some rudimentary form of language interpretation, I won't be typing out requests in English, I'll be verbally communicating my request to the computer, in whatever native language I was raised to speak.

  • Are configured and controlled and owned like a smartphone (controlled by Apple or Google) then NO! I refuse to have one
  • Jesus butt effing Christ, Bill.. just take the W and retire. Why do you keep commenting on current events.

  • How many of his past predictions have come true? So far, I can't recall any. Of course, if you keep predicting everything, once in a while you will be right. Another question: is there anyone saying that the age of AI hasn't arrived? It is a fuzzy subject where you can easily declare you were right either way.

    • Re:

      The "arrival" of the age of AI depends very much on what you mean by AI. AGI is not here and it is unclear whether it ever will be. Statistical classifiers fitted with some gadgets were already a thing back when I studied CS, some 35 years ago. The only breakthrough in the current hype is a written natural language interface that works reasonably well for average people; the actual knowledge retrieval process behind this is atrociously bad. AI that routinely "hallucinates" is about as useful as a human kno

  • Bill also added "And we'll know the AI is advanced, when it learns to cut off a competitor's air supply, establish monopolies, and cheat consumers!" He then began stuttering and asked someone to change his battery.
  • This is the same guy who entirely missed the internet.

    His two big moments in tech were GUI and AI?

    Really? There were GUIs in sci-fi tv shows since he was at Harvard and still wondered what a naked woman looked like. And AI is about as old. It isn't a new breakthrough technology.

    You want tech that has changed the world? The internet, smart phones and Wi-Fi.

    Everyone and anyone for a few bucks can get a shitty phone and access the world's knowledge and communicate instantly with billions of people from mos

    • Re:

      Exactly. The real revolution was getting electronic data communication and access and publishing your own ideas to everybody and that means the Internet. All the rest is not so much different from what did exist back then. Of course MS took a few decades to catch up and they did it badly.

      Yes, BG is an idiot.

      • Re:

        He didn't miss the Internet. Remember The Road Ahead? What Bill missed was the Web. He believed that the future of the internet was applications, not unlike the hell that exists on Mobile today.

        • Re:

          It was wishful thinking, he was trying to make it happen because Windows was the king of applications, and still is. There's simply not as much software for any other OS as there is for Windows, that's an indisputable fact, and it's been true for decades. In an applications-for-everything world, Windows is king of the desktop. If they hadn't changed the APIs for making apps on Wince three times in short succession, they might have been king of mobile as well. Developers! Developers! Developers! Whoops!

    • Re:

      He didn't miss the internet. He missed the Web. There is a difference. Didn't you read The Road Ahead?

  • Thinking ANYTHING in 1980 was "forerunner of modern operating system" is something only an ignoramus would think. He should learn what came about circa early 60s to early 70s, that's when the various techs of the "modern operating system" were born.

    • Re:

      Typical Microcrap mindset: Ignore everything that already exists, rediscover it badly and without understanding the point, then implement it in a screwed-up way.

  • The current hype is by far not the breakthrough everybody without actual AI knowledge thinks it is. It is an incremental step. And not a large one. Any of those predictions will take several decades at the very least to happen.

    • Re:

      There will never be true AI. There will however be idiots that connect these current "AI" models to infrastructure causing all sorts of chaos.
    • Re:

      Very true.

      Just don't think that we can get to where "those predictions" are by the slow accumulation of incremental improvements. No matter how good you get at making ladders, you're never going to reach the moon.

  • "Predicts" is a less than ideal descriptor for sentences in the past tense.
  • What about the Internet Bill? Or did you leave that out b/c you didn't see it coming and it bit you in the ass?

    Also, wrt AI, uhmm, unless you've been living under a rock, file this under Captain Obvious.

  • Half a year after everybody had been hyperventilating about AI bots, this guy finally also uses his predictive power. It had been so great in the past... It is just amazing how everybody listens to a dude with a big wallet, especially if he makes predictions that are platitudes.
  • Christ Bill, haven't you done enough to screw the human species already? You've set technology back 25 years or more with your monopolistic bullshit. Now you want to be seen as some sort of prophet? Leave people alone already, you've done enough damage. Your dealings with Jeff have made you cringe-worthy too. Yuck.
  • Sorry, Bill, but chiming in to parrot zeitgeist ain't that.

    AI is not what most people think it is; not even most of the people promoting and/or warning against it.

    It's dangerous like a flood is dangerous. But it's only dangerous like a person is dangerous, if a dangerous person is the one unleashing the flood.
  • The guy who missed the Internet and changed his book to make his predictions seem more on point?

    Apparently, when you're old and rich, stating the obvious can get you headlines. Darn.

    Meanwhile, anyone who's seen the previous cycles of AI knows that there is always hype, followed by overinflated expectations of the future; then the technology matures and people realize it's just another tool that doesn't magically solve all problems, then it becomes a standard tech thing and isn't even called "AI" anymore,

