NYT: It's the End of Computer Programming As We Know It - Slashdot

 1 year ago
source link: https://developers.slashdot.org/story/23/06/03/1514212/nyt-its-the-end-of-computer-programming-as-we-know-it


NYT: It's the End of Computer Programming As We Know It (nytimes.com) 158

Posted by EditorDavid

on Sunday June 04, 2023 @03:34AM from the and-I-feel-fine dept.

Long-time Slashdot reader theodp writes: Writing for the masses in It's the End of Computer Programming as We Know It. (And I Feel Fine.), NY Times opinion columnist Farhad Manjoo explains that while A.I. might not spell the end of programming ("the world will still need people with advanced coding skills"), it could mark the beginning of a new kind of programming — "one that doesn't require us to learn code but instead transforms human-language instructions into software."



"Wasn't coding supposed to be one of the can't-miss careers of the digital age?," Manjoo asks. "In the decades since I puttered around with my [ZX] Spectrum, computer programming grew from a nerdy hobby into a vocational near-imperative, the one skill to acquire to survive technological dislocation, no matter how absurd or callous-sounding the advice. Joe Biden told coal miners: Learn to code! Twitter trolls told laid-off journalists: Learn to code! Tim Cook told French kids: Apprenez à programmer! Programming might still be a worthwhile skill to learn, if only as an intellectual exercise, but it would have been silly to think of it as an endeavor insulated from the very automation it was enabling. Over much of the history of computing, coding has been on a path toward increasing simplicity."



In closing, Manjoo notes that A.I. has alleviated one of his worries (one shared by President Obama). "I've tried to introduce my two kids to programming the way my dad did for me, but both found it a snooze. Their disinterest in coding has been one of my disappointments as a father, not to mention a source of anxiety that they could be out of step with the future. (I live in Silicon Valley, where kids seem to learn to code before they learn to read.) But now I'm a bit less worried. By the time they're looking for careers, coding might be as antiquated as my first PC."



Btw, there are lots of comments — 700+ and counting — on Manjoo's column from programming types and others on whether reports of programming's death are greatly exaggerated.


  • "It's the End of Computer Programming as We Know It"

    "This won't necessarily be terrible for computer programmers - the world will still need people with advanced coding skills"

    Talk about hedging your bets.

    • Also: "one that doesn't require us to learn code but instead transforms human-language instructions into software"

      Isn't that what code does?

      • Programming languages are "human languages" only in that they are created by humans, but their purpose is for speaking to computers. They're human-computer languages.

        The idea that we're near the point where we can ask the software to write us a program from a natural human language is dumb, though. You might be able to do it, and if what you asked for is so simple it's commonly used as a teaching example you might even get a properly working program out. But for any other case, a trained programmer is going to have to not only fix problems with the produced program, but actually find out what the problems are. The same user who can't write program code can't imagine reasonable test cases for it.
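
        That last point can be made concrete: a useful test suite for even a trivial sorting routine covers cases a non-programmer rarely thinks to ask about. A minimal Python sketch (the function name `my_sort` and the built-in stand-in are illustrative):

        ```python
        def my_sort(items):
            # Stand-in for the AI-generated code under test;
            # here it's just Python's built-in sort.
            return sorted(items)

        # Edge cases a non-programmer rarely thinks to check:
        assert my_sort([]) == []                    # empty input
        assert my_sort([5]) == [5]                  # single element
        assert my_sort([2, 2, 1]) == [1, 2, 2]      # duplicates
        assert my_sort([1, 2, 3]) == [1, 2, 3]      # already sorted
        assert my_sort([-1, -3, 0]) == [-3, -1, 0]  # negatives
        ```

        A user who can't code could run these assertions, but wouldn't know to write the empty-list or duplicate cases in the first place.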

        • Re:

          So, all in all, we use code to "transforms human-language instructions into software". Exactly what the article says we should do in order to avoid coding...

        • If you were to write code in a normal human language, it would read like a legal document. We are just not very specific in our normal usage of language, and a computer needs that specificity. Honestly, I think writing English specific enough for a computer would be harder to understand than learning a narrower-scoped language designed for the problem. Sometimes I wonder why the legal profession has not done that as well.

          • Sometimes I wonder why the legal profession has not done that also.

            It's interesting because the whole legal system is set up in many countries basically completely in reverse to how it should be. Someone (or a group) writes a law, passes the law, and then judges and lawyers have to interpret it after the fact to figure out exactly what it means and how it should be applied. There is no formal logic and rigor applied to writing laws. Often it seems like they leave gaps in the laws specifically so that laws can be circumvented or manipulated.

            • Re:

              "it seems like they leave gaps in the laws specifically so that laws can be circumvented or manipulated"

              I agree. I don't know if it is intentional or not, but it would be nearly impossible to write laws without doing this. You could obviously write the law, but if it were written so that it should not be interpreted at all, people like me would find loop holes from it and some would take advantage of those. Currently even if you find a loop hole from the law, you can't abuse it, because judges will decide t

            • Re:

              In the past, judges were essentially the law, and none were trained. Law schools producing trained lawyers are relatively new, within the last two or three centuries, and the idea that judges should also have legal training is relatively new as well. Complex and finely detailed laws are much newer than that, and we still see legislators muck it all up even in 2023 (legislating being the last refuge of those with no applicable job skills).

        • Or as Brian Kernighan put it, "Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it."

        • Replace "trained programmer" with "sufficiently advanced AI". Assuming that humanity doesn't get destroyed somehow & technological progress continues, it's only a matter of time. Then what do you get?

          Ordinary people assisted by their own 'uber-intelligent personal assistant' (or a 'team' of those) that understands every programming language in existence, besides countless other areas of expertise. Ready to chew on any problem a human can formulate somehow.

          "Steal my bosses

        • Re:

          "transforms human-language instructions into software"

          "They're human-computer languages."

          What is the difference?

        • Re:

          Reminds me a bit of Prolog. Novices, or those who had just read half a page, would proclaim: You don't tell a computer what to do, you tell it what you want! To which actual Prolog programmers would scoff.

          Every few years (or months) there's an "is this the end of computer programming?" trend. There were report generator languages. There's visual programming, just drag boxes on the screen, select options, etc. There's rapid prototyping (just ship the proto). Visual Basic and its ilk. UML drawing tools where

      • Re:

        I would like them to try. Ask the current chatbots a question and the exact same phrasing will give different answers.

        So our language is not specific enough to convey the information needed to deterministically program a computer with the help of transformational AI. To solve this we need to create a subset of a natural language. Voilà! We have Yet Another Programming Language for AIs (YAPL-AI).

        And even if you could achieve this, what about debugging? If a human is to correct hard to find errors, t
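
        The variability described above is inherent to how chatbots decode: they sample from a probability distribution over next tokens, so identical prompts can produce different outputs unless decoding is made deterministic (e.g. greedy, temperature-0 sampling). A toy Python sketch, with invented candidate answers and probabilities:

        ```python
        import random

        def sample_reply(prompt, temperature, seed=None):
            # Toy stand-in for LLM decoding: pick among candidate "answers"
            # weighted by invented probabilities.
            rng = random.Random(seed)
            candidates = ["answer A", "answer B", "answer C"]
            weights = [0.5, 0.3, 0.2]
            if temperature == 0:
                # Greedy decoding: always return the most likely candidate.
                return candidates[0]
            # Otherwise sample, so repeated calls can differ.
            return rng.choices(candidates, weights=weights, k=1)[0]

        # Same prompt at temperature 0: identical output every time.
        assert all(sample_reply("p", 0) == "answer A" for _ in range(10))
        ```

        Real deployments usually leave the temperature above zero, which is exactly why the same phrasing gives different answers.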

      • Re:

        No. That is what a compiler does. It takes "human-language instructions" composed in a very ordered, structured, and restricted human language and turns those instructions into "software". Or "firmware". Or whatever ware you want to call it depending on where the resultant software is stored. If it is stored in read-only storage, then it is "firmware" (because it is firm). If it is stored on ephemeral storage (read/write) then it is called "software". If it is merely a statistical analysis and does
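
        That role can be sketched in miniature: a rigidly restricted "human-language" instruction goes in, lower-level operations come out. This toy Python translator (the command syntax and opcodes are invented for illustration) compiles commands like "add 2 and 3" to stack-machine code and runs it:

        ```python
        def compile_expr(instruction):
            # Translate a restricted English command, e.g. "add 2 and 3",
            # into stack-machine operations.
            verb, a, _, b = instruction.split()
            ops = {"add": "ADD", "multiply": "MUL"}
            return [f"PUSH {a}", f"PUSH {b}", ops[verb]]

        def run(program):
            # Minimal stack-machine interpreter for the emitted ops.
            stack = []
            for op in program:
                if op.startswith("PUSH"):
                    stack.append(int(op.split()[1]))
                elif op == "ADD":
                    b, a = stack.pop(), stack.pop()
                    stack.append(a + b)
                elif op == "MUL":
                    b, a = stack.pop(), stack.pop()
                    stack.append(a * b)
            return stack[0]
        ```

        The restricted grammar is the point: the translator works precisely because the input language allows almost no ambiguity.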

      • Re:

        Human language == imprecise. AI code from human language == massively imprecise. AI code from human language that no one scrutinizes, verifies, and tests == AImageddon.

    • Re:

      What concerns me is how people will obtain those advanced coding skills, say, twenty years after we've eliminated "junior programmer" as a position. Who is going to be able to write a correct prompt for the AI and then check the output for correctness?

    • Re:

      It's 2023. And you're probably still banging away on a plastic box full of keys in order to type a response into a computer. Just like nerds swore would be obsolete "soon"...back in the 90s. Only difference today is you'll spend a premium for a keyboard that's ironically "retro".

      When reading headlines today, one hardly has to hedge the bet that you're reading content created somewhere between hype and bullshit, for profit. Six months from now we'll be reading about the amazing programming industry being

      • About 15 years ago there was a round of editorials all over the place saying schools would do better dropping abstract math like algebra and trigonometry and focusing on practical math like statistics.

        I'm sure there were others before my time. Both-sidesing the bleeding obvious is as old as time.

  • "Computer Programming As We Know It" - I would ask the columnist to define computer programming... as he knows it.

    • I'd love to see him trying to make AI generate the code for a web browser or a word processor...

      • Re:

        After that lets see him use AI to find the logic bug in a 10,000 line program.

      • Re:

        I have a simpler request. Ask it to write a bubble sort in the BASIC programming language. I'd *love* to see what it comes up with!

        • Something like that would be trivially easy to get right. Just look up an existing solution on the web. No professional programmer ever writes their own sorting code anyway, because every programming environment already includes this, and writing your own would just possibly result in more bugs, or at worst, just wasted time. The whole reason AI would fail at writing code in the modern world is that most of the problems programmers solve involve writing new stuff that hasn't been written already, and that the people asking for the code have no way of giving enough detail to get a working result out of it. Programmers are basically intermediaries between people who don't know how to write specifications and the computer. They have to read between the lines to determine what the customer actually wants, see the problems with their logic, and tell them why what they are asking for doesn't even make logical sense. If you just blindly implement what people ask for, you will end up with an unworking mess or something that has no logical consistency.

        • Re:

          Me > write a bubble sort in BASIC programming language
          ChatGPT > Here's a basic implementation:
          10 DIM A(10)
          20 FOR I = 1 TO 10
          30 INPUT "Enter a number: ", A(I)
          40 NEXT I
          50
          60 FOR I = 1 TO 9
          70 FOR J = 1 TO 10 - I
          80 IF A(J) > A(J + 1) THEN SWAP A(J), A(J + 1)
          90 NEXT J
          100 NEXT I
          110
          120 PRINT "Sorted numbers:"
          130 FOR I = 1 TO 10
          140 PRINT A(I)
          150 NEXT I

        • Re:

          Here's what ChatGPT came up with:


          10 REM Bubble Sort
          20 DIM nums(10)
          30 PRINT "Enter 10 numbers:"
          40 FOR i = 1 TO 10
          50 INPUT nums(i)
          60 NEXT i
          70
          80 FOR pass = 1 TO 9
          90 FOR i = 1 TO 10 - pass
          100 IF nums(i) > nums(i + 1) THEN
          110 temp = nums(i)
          120 nums(i) = nums(i + 1)
          130 nums(i + 1) = temp
          140 END IF
          150 NEXT i
          160 NEXT pass
          170
          180 PRINT "Sorted numbers:"
          190 FOR i = 1 TO 10
          200 PRINT nums(i)
          210 NEXT i
          220
          230 END

    • The world hates programmers and has been trying to eliminate the position ever since it was created. So, everyone is super-eager to pronounce the end of programming. Too eager. We simply aren't there yet.

      When we do actually get there, it won't be the end of JUST programming. It will be the end of all knowledge-worker fields. AI that can truly think both critically and creatively will be able to out-do humans at everything that involves thinking.

      We will know this day has truly arrived when I can tell a

  • If we can get an AI to work on writing an AI for that, I'm all for it!

  • NYT: It's the End of Computer Programming As We Know It

    R.E.M. eventually came up with catchier lyrics...

  • The hardest part of programming, at times, is figuring out how to translate customer requirements into what they ACTUALLY want. AI is not gonna be able to do this for a good while.
    • Re:

      You mean: customers who have lived on a diet of smartphones and Facebook their entire lives are going to be just as incapable of clearly expressing their requirements in ways that can be understood, regardless of whether it's a human or an AI doing the "coding" [1]? Don't worry: ChatBots - the ones that have no consciousness and no real emotions but can perform the sleep-walking task of regurgitating predictive-text answers - will fantasise better customer requirements, unconnected to reality, for them out o

      • Re:

        "Coding" is what we imagined we were doing back when I was sixteen. Going on thirty years later, the industry hasn't grown more mature, certainly not mentally. But there's more people having wet dreams about "coding" now.

        But then, you can see problems everywhere. The use of "hacker" as something to do with computer security (not us, guv!), the use of "bug" to mean "defect" (not our fault, guv!), and so on, and so forth.

        More broadly but related: Training is what you do to dogs. The carrot-and-stick approac

    • Re:

      It doesn't matter how clear and precise the requirements are stated, a modern LLM is simply not capable of producing code that meets them. They just don't work that way, as I and countless others have endlessly explained. It's amazing anyone still believes that fiction.

      Apparently, it's going to take a high-profile failure like the lawyer thing to debunk that particular myth.

      • Re:

        Yeah, the lawyer thing would actually have to happen in a significant case. When discussing the lawyer case at a gathering with an executive who is gung-ho that AI will replace all the programmers and lawyers and such, his perspective was that the lawyer just didn't put into the prompts something like "while making sure not to make up cases that didn't exist". He thinks the AI just needs to be told not to hallucinate and the problem is solved. That's why he will be a massively successful executive in

    • Re:

      On the contrary, I believe. Trained LLMs generally have the material and capacity to derive better than a programmer what a customer actually means and what is implied in loose requirements.

      What is missing is having the algorithms actually asking the customers to fill in the blanks or ambiguities instead of hallucinating or "guessing".

      • Re:

        There's no way it is *better* than a human at understanding human requests. I suppose if they asked 'provide the next number in the Fibonacci sequence from a given position', the human might have to search Fibonacci sequence real quick and the LLM might skip straight to spitting up an implementation that it ingested from Stack Overflow, but that represents a trivial slice of deriving the requirements.
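
        For reference, that throwaway example really is trivial either way; "Fibonacci number at a given position" is a few lines of Python:

        ```python
        def fib(n):
            # Iterative Fibonacci: fib(0)=0, fib(1)=1, fib(2)=1, ...
            a, b = 0, 1
            for _ in range(n):
                a, b = b, a + b
            return a

        # Position 10 in the sequence is 55.
        assert fib(10) == 55
        ```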

    • Re:

      If it's Ford, which has 7 managers at the same level, chaos dictates the requirements. Seriously, a company I used to work for called them the seven headed monster. We'd get seven sets of requirements and most of them contradicted the others. Fun times. I got laid off (along with everyone at my level), so go fuck yourselves! I wish you the worst of luck. Ford, sorry, I bought a car of yours - nothing against my former employer, it was the best car available at the time (2014).

    • Re:

      That is not a problem if AI can generate code instantly. You feed it customer requirements and let customers try what they got. If it is not what they wanted, they just change the requirements and try again.

      Bigger issue is to make AI that can actually write code as well as a good programmer, or even a bad one.

      • Re:

        And they make about half as much.

        Yeah, no. If you can reliably decipher the customer's wants and needs, display it back to them in a form they can understand, and equally reliably show them better ways of approaching their needs, you'll do very well. Did that for decades and made very good money. Typically more than their programming staff.

        You do understand that parodies are gross exaggerations and "the dude" is no more real than the "I can code up a sophisticated, never before done interface in an afte

        • Re:

          That scheme also works in the controls industry. When I was in that industry, we would have a meeting with customers so they could explain what they wanted. They were pretty much incapable of writing it down much less expressing themselves semiformally. I'd take notes and then write up what I thought they said. Then I would send it back to them telling them they were free to change anything especially where I got something incorrect. I'd get back an edited version that was not heavily edited. If I agreed an

      • Re:

        I've dealt occasionally with what is supposed to be that, and generally they are a waste of time and do nothing of value.

        They take a clear requirement that is easy enough to follow, and stall things while they prove their value by 'lawyering it up' and making a succinct clear requirement into a verbose hard to follow mess. Then, the programmers go back to the original stakeholder and ask 'what was it you wanted again, I can't follow the 'processed' requirement?' and get that original succinct requirement an

  • Coding in general was never getting simpler. One coder on one piece of code, sometimes yes. But as a whole it has increased in complexity faster and faster every 5 years.
    • Re:

      This line was the biggest unsupported assertion in the article (and there were several).
      My experience has been that as people work to "simplify" coding, coders are tasked with handling increasingly complex tasks. Overall, my job has maintained its complexity.
      (And yes, this is anecdotal and not supported, but I'm writing a slashdot comment not an article for NYT.)

    • Re:

      You want simplicity? I give you, COBOL [imgur.com].

      • Re:

        COBOL doesn't simplify a problem, as I'm sure you recognize. I suppose a programming language can make something complicated, but that means it is probably the wrong language, or the language is so close to the operating environment that it necessarily includes those peculiar features.

    • Re:

      The coding ecosystem has been about increasing simplicity.

      With assembly, you had to meticulously describe instruction by instruction what to do.

      Then C provides shorthand, and a compiler is free to select alternative translations of that code into assembly to get better performance than a naive one. However, you are still very carefully managing types, exactly how memory is laid out, whether memory is allocated from the heap or used on the stack, and tediously describing various array lengths across many function

  • The end of programming will come one day—along with the paperless office and the year of Linux on the desktop.

    • Re:

      Not sure about your examples... How much paper do you see in offices (except in bathrooms, and I hope we get to keep it there until I learn how to use the three shells) nowadays compared to 25 years ago?

      • Re:

        Sounds like Zeno's Dichotomy Paradox to me.

        Each 25 years the amount of paper in offices is reduced by half...
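
        The joke checks out as arithmetic: halving per period is geometric decay, so the remaining fraction after t years is (1/2)^(t/25), which approaches but never reaches zero. A quick Python check:

        ```python
        def paper_remaining(years, start=1.0, half_life=25):
            # Geometric decay: the amount halves every `half_life` years.
            return start * 0.5 ** (years / half_life)

        # After 100 years (four halvings) a sixteenth remains; it never hits zero.
        assert paper_remaining(100) == 0.0625
        assert paper_remaining(500) > 0
        ```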

    • Re:

      Unlike the other examples, there is financial incentive to replace you with software that costs less than cheap Chinese labor. Sure, unions will step in and insist on some humans in the factories, so to speak. But automation always appeals to producers as a way not to pay employees. Take chipmaking. It left this country because teenagers in China can do it for $5/day in equivalent pay. Now there are talks about bringing chip making back to the USA, but do not be mistaken that that equates to all thos
  • I wonder how the journalist imagines that AI will be coded.

  • There's been 40 years of 'AI replacing programmers'. But an 'AI' has no clue what you actually need to do - what your constraints and circumstances are for this project. Because it is in no way intelligent. An 'AI' as we know it is just a machine that probabilistically turns tokens into the next tokens based on what it's previously seen. It can only output things it's seen before, blindly. It is perfectly happy telling you to do a bubble sort for 50 million records. It's basically that outsourced guy searching Stack Exchange and copy-pasting code snippets together till they compile and calling it good.

    Software engineering is actually a harder large problem than something like driving. Yes, driving often has nastier consequences for failure, but the solution space is much more constrained. You know what to do, you just have to execute properly, which means determining your route and then following it without running into anything or violating laws. You can reduce it to route finding, then small decisions. But with software engineering, if someone just tells you 'I need to do X' you have a staggering array of options: what language? what OS? what hardware? which algorithms? what data structures? what libraries? parallel processing or not? do I need a web interface? how about data integrity and security? backup? cloud? An 'AI' has absolutely zero concern about any of that, because it's not intelligent and will spit out the easiest possible solution that compiles (like that outsourced programmer using Stack Exchange).

    An engineer takes all the requirements and tradeoffs and decides on the optimal solution, which can change wildly given all the constraints and requirements. There is no single best solution for all circumstances. For instance, which sort you use, or which lookup, are *highly* dependent on the data and the needs. Maybe it's mostly or completely in order, maybe it's not. Maybe you can hold it all in RAM, maybe you can't. Maybe there's a best way to index it given what you know about the dataset. An AI has zero clue about any of this.

    A code pig ('goofus') is someone who gets told to write code to do X and has no clue about what they're doing in context. They're called 'code pigs' because they're just in their cubicles, rarely let out, and just kind of wallowing in the poop - the classic Microsoft programmer (or any other large corporate drone). 'Programmers' is the more polite term. Most people called 'software engineers' are not; they're just programmers with title inflation. These people could possibly eventually be replaced by a coding 'AI'. The software engineer will meticulously construct a prompt for a single method (as far as you can trust it), and the coding AI might produce some decent code for that method by plagiarizing code it has already seen in a GitHub repository. And then the software engineer will need to check it, but it still might be faster than dealing with a code pig.

    But there is no way that a coding 'AI' (which has no intelligence) can possibly replace an engineer unless the AI actually becomes generally intelligent... and then all bets are off for everything! The current batch of coding 'AI's could be convenient autocompletes for small sections of code, like GitHub's Copilot is (but again, you have to check its output; about 3/4 of it is defective without tweaks). So again, for someone who knows what they're doing, it will be a tool they can use or not use.
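
    The bubble-sort jab above is quantifiable: bubble sort does on the order of n²/2 comparisons versus roughly n·log₂ n for any decent sort, so at 50 million records the naive choice costs about a million times more work. A back-of-envelope estimate in Python, ignoring constant factors:

    ```python
    import math

    n = 50_000_000
    bubble_ops = n * n / 2        # ~n^2/2 comparisons for bubble sort
    good_ops = n * math.log2(n)   # ~n*log2(n) for merge sort / timsort

    # Roughly a million-fold difference at this scale.
    assert bubble_ops / good_ops > 900_000
    ```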

    • Re:

      I think the best an AI could do is to cut-and-paste from code examples. I seriously doubt it can exhibit the creativity needed to write original code. Not that I haven't done cut-and-paste myself, but I DO write original code as well (and, wow, have I made original mistakes!)

  • A lawyer friend once explained to me that there were references in law books to laws that were no longer on the books.

    I replied that this was just like a "use after free" error in a C program.

    While there may be fewer programmers in the years to come, and a lot of simple cases will be automated, we'll still need auditors.

    The law is full of specialized jargon, much like computer code, and I suspect that if we replace programming with English, it will soon become a specialized language much like law, which is code!

  • by rally2xs ( 1093023 ) on Sunday June 04, 2023 @05:05AM (#63574793)

    While this looks like a serious threat to an entire way of life for millions of people employed in "things software", it might be just in time for some other problems that are not currently solvable on a practical basis.

    There are probably billions of lines of code written in obsolete languages like COBOL, and even in very specialized languages on military computers that force the use of ancient hardware loading from paper tape, magnetic tape, floppy disks, etc. Everyone would really love to replace them, but rewriting all that code (and, possibly even more expensive, testing it) to target, say, Intel chips is just prohibitive.

    Having AI that can look at the machine language of a computer and spew out human-readable code, complete with comments on not only what it's doing but why, will, if it can be produced, be a huge leap in taming the expense of replacing ancient computers with newer things. As it stands, the cars we're buying today probably run millions of lines of software targeting very specific CPUs, which is going to make 30-year-old cars into automotive bricks. Even if we could dispense with the computers controlling internal combustion engines, the software doing other highly indispensable things like air conditioning, heating, and navigation will doom those cars, because once the highly specific computers it runs on fail, they can't be replaced for reasonable expense. It would be hideously expensive. I'm going to be involved with a road rally called "Great Race" in a couple weeks, a rolling old-car museum of stunningly well restored cars from the beginning of cars all the way up to the 1970s. All those cars still run the way they used to and are completely serviceable, even if the brakes may be scary and the acceleration is measured with a calendar. They will leave St. Augustine, Florida and arrive in Colorado Springs, CO the following week just like they would have 50-100 years ago. But in the future, there's not likely to be cars up into the age of computers, since the relatively fragile silicon components will eventually release the little packet of smoke built into them at the factory and cease to function, and finding working replacement parts will be nearly impossible.

    But if we could grab a Raspberry Pi from the shelf, and have an AI translator that could look at the old machine code produced by a compiler that no longer runs on any existing computer, and produce code for the Raspberry Pi that yields the same outputs as the old automotive computer, maybe billions of dollars of otherwise completely serviceable vehicles could be kept running for reasonable expense.

    So if your nuclear submarine is still storing and loading its software with floppies, maybe it could be updated to load from more contemporary sources if the input devices could be replaced with commonly available (translation: cheap) mass-produced devices, and the software would be provably correct every time.

    I think we desperately need this whether we realize it yet or not.

    • There are probably billions of lines of code written in obsolete languages like COBOL

      COBOL is far from obsolete. The world still runs on COBOL, and for good reason.

      But in the future, there's not likely to be cars up into the age of computers

      There are already open source (hardware and software) EMS / ECU systems produced by the hobbyist community.

      But if we could grab a Raspberry Pi from the shelf, and have an AI translator that could look at the old machine code produced by a compiler that no longer runs on any existing computer, and produce code for the Raspberry Pi that yields the same outputs as the old automotive computer

      We already have technology that transforms programs. We call them "compilers". They're very useful, but probably not what you actually want. Writing cross-compilers is notoriously difficult, for reasons that should be obvious.

      A much better, and far simpler, approach is emulation. The big advantage here is that you won't need to change anything about the original program. We actually have mainframe emulators in use today, keeping older software in production.
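
      For the curious, an emulator amounts to a fetch-decode-execute loop that reproduces the old CPU's behavior one instruction at a time, so the original machine code runs unmodified on the new host. A toy Python sketch for an invented three-instruction accumulator machine:

      ```python
      def emulate(program):
          # Toy CPU: one accumulator, three opcodes.
          # Each instruction is a (opcode, operand) pair.
          acc, pc = 0, 0
          while pc < len(program):
              op, arg = program[pc]    # fetch + decode
              if op == "LOAD":         # acc = arg
                  acc = arg
              elif op == "ADD":        # acc += arg
                  acc += arg
              elif op == "HALT":       # stop; result is in acc
                  break
              pc += 1
          return acc

      # The original "machine code" runs unchanged on the new host.
      assert emulate([("LOAD", 2), ("ADD", 40), ("HALT", 0)]) == 42
      ```

      A real emulator does the same thing with the full instruction set, registers, memory, and I/O of the original hardware, which is why the legacy binaries need no changes at all.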

      While emulation is obviously the right choice, either solution is going to produce better results cheaper, faster, and more reliably than an AI. Just try to imagine what would go into training such a system. Writing the cross-compiler would be less work, and you'd probably want to write an emulator as part of that process anyway. To top it off, you couldn't even trust any of the code it produced in the end. AI is just the wrong tool for the job.

      [...] and the software would be provably correct every time.

      What does "provably correct" mean to you? Also, cheap commodity hardware may be unsuitable for some environments. You can't just stick a $25 SBC anywhere you want and expect it to be as reliable as the hardware it ostensibly replaces just because it's newer.

      • Re:

        COBOL is far from obsolete. The world still runs on COBOL, and for good reason.

        As I indicated [imgur.com] a bit further up.

    • Re:

      This, this, is why a golden age of automobile collecting is limited, finite, and will be static soon.

      You can buy most any part you need for a '67 Mustang, even door handles, likewise many 50s-60s-70s cars, they are unique and desirable, and so collectible.

      Will you be able to buy replacement dashboards for any of the modern automobiles so heavily computerized? Will the software be rewritten for the available hardware 30 years from now, accommodating a necessary conversion to electric drivetrains (hello, mand

    • Re:

      The rub is in "and the software would be provably correct every time".

      Provability is very much not a characteristic of how AIs operate -- just the opposite, in fact. Their main problem is that their results are unreliable, and nobody has the first clue about how they derived them.

      The reason the nuclear submarine is still running decades-old code in an obsolete language is because the Navy's foremost programming experts don't trust themselves to rewrite it without making some mistake and getting someone killed

      • Re:

        " don't trust themselves to rewrite it without making some mistake and getting someone killed"

        Nobody should "trust themselves" with software. Rewriting it means testing it six ways from Sunday so's you don't get someone killed. Testing is very expensive, and is money that can be saved by buying more floppies: $10 million to test the rewritten software, or $10K to buy more floppies.

    • Re:

      A reason they might still use floppies is that despite the baroque and ostensibly fragile mechanisms, they are actually shockingly reliable if you use formats chosen for the purpose. Flash memory reliability is sketchy. A HD 5.25" or 3.5" floppy disk formatted to around 180kB/side (and preferably used single sided) can reliably store data for absurdly long periods of time. An 8" floppy is even better, but those are unwieldy.

  • Lotsa luck! This is what, the three hundred and fifty-seventh thing that was going to let MBAs give fuzzy and incomplete ideas to a piece of software and have it magically crank out bug-free software?

    While ChatGPT is clearly more sophisticated, all of this reminds me of people reacting to Eliza many years ago.

    In order for ChatGPT to successfully produce any program on its own (rather than just cut-pasting stackoverflow.com) you would have to tell it what to code in English at a fine-grained level. So fin

  • "Over much of the history of computing, coding has been on a path toward increasing simplicity."

    Perhaps, but problems got more complex. In my 35-year career I went from desktop applications that used 50 lines of 80 characters for display and stored data on a single dedicated server with a 30-megabyte hard drive.

    My current project uses React with hooks, Node with serverless functions on a web hosting service, lots of fancy CSS, a NoSQL database hosted elsewhere with an API in GraphQL, and libraries, libraries, libraries written by 3rd parties, constantly being updated.

    None of this was even imaginable 15 years ago. Do we have better applications? Yes, much better. Are they simpler to write than those 30 years ago? Uh...nope.

    • Re:

      Yeah, in what world has programming got simpler?

      If anything, it's become a guild where the gatekeepers deliberately make it as complex as possible by grabbing at every new idea and library they can.

    • Re:

      The same application you wrote 30 years ago is simpler to make, if implementing the same UI and general design.

      I'd still posit that it's even easier to make a nice modern looking interpretation of that same application than it was to make that program back in the day. The choices can be paralyzing (am I using Angular? React? Svelte? Vue?) and some of the hyped 'patterns' are frequently not what they are cracked up to be, but peer pressure causes them to be over implemented... However once you understand t

  • Not too sure about programming. But I bet current AI can do a great job writing speculative misinformed clickbait to fill pages, better than most NY writers. I think someone's got to be worried.

  • Just because something is called "Artificial Intelligence" does not mean that it is intelligent.

    The history of "AI" is a sequence of ever-changing definitions of what constitutes intelligent activity. In the 1950s it was assumed that playing checkers or chess showed intelligence. In the 60s the ability to do freshman calculus was the test. After that came natural language parsing, which was conflated with language understanding. By the 80s it was expert systems and Prolog. In the 90s robots became the rage. The 2000s had the start of autonomous vehicles and early neural nets, and by the mid-to-late 2010s we ended up with high-end ANNs and now LLMs.

    The examples are not exhaustive, and they show that the definition of AI is always changing. There is, however, an ongoing pattern: when a particular AI definition/fad fails, a new definition comes into fashion and is the Next Big Breakthrough. And in each cycle the hype is more inflated, and the money pumped into the technology goes up accordingly. That's the true driving force. Hype and big bucks go hand in hand.

    • Re:

      There is, however, a ongoing pattern: when a particular AI definition/fad fails

      Failed? Computers can now play chequers and chess (jury's still out on go!) far, far better than any human. In other words, something which required intelligence can now be done with artifice. I wonder what a good term for that would be?

  • Normally I receive a safety design that has been approved by the customer and the equipment vendor. It's a document that says in a formal way "when these things happen or when a person or object is in this area then that equipment should stop".

    The safety programming is the simplest and easiest type of programming in these systems. It has to be that way because it's very important that it must be right. The spec is very clearly defined, the safety devices are very simple and very reliable, and there are strict rules for how the logic must be written.
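
    The kind of rule the spec expresses ("when a person or object is in this area, that equipment should stop") tends to reduce to very plain, fail-safe boolean logic. A minimal sketch, with all signal names invented for illustration (real safety code runs on certified safety PLCs, not general-purpose Python):

```python
# Sketch of fail-safe interlock logic of the kind described above.
# Signal names are hypothetical, not from any real spec.

def motor_enabled(e_stop_released, guard_door_closed, light_curtain_clear):
    """Permit motion only when every safety condition is affirmatively true.

    Fail-safe convention: a dropped sensor wire reads False, which
    stops the equipment rather than masking the fault.
    """
    return e_stop_released and guard_door_closed and light_curtain_clear

print(motor_enabled(True, True, True))    # all clear: motor may run
print(motor_enabled(True, True, False))   # person in light curtain: stop
```

    The strictness of the rules is the point: with logic this simple, a human reviewer can check every line against the approved spec.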

    Let's say ChatGPT is approved for safety code generation. The project manager fires me and just hands that safety spec to ChatGPT.

    Still, there are always instances of "oh gee whiz, they never thought somebody would do this when they came up with the spec; better make sure they can't get hurt when they do". Some of those are things you could figure out sitting at your desk. Some of them are only obvious when you get to site and see the physical layout of the system and ways people could climb over or under or around to get where you don't expect them to be. Let's put these edge cases aside for the moment and focus on the primary issue:

    ChatGPT is famous for generating output that looks right but is factually wrong. It doesn't understand the intent of what it's being asked to do. It doesn't understand anything; that's not even remotely how it works. So I'd expect a safety program that passes validation but does unexpected things during production.

    When somebody is hurt because the safety system was programmed incorrectly, who will pay them or their surviving family?

    The design committee did their job correctly; the safety spec was valid. The project manager used an approved AI to generate the code. The AI was certified to be compliant with regulatory standards OSHA NFPA etc. The equipment vendor supplied safety devices certified in the same manner. The operator followed safety rules when operating the equipment.

    Somebody got hurt and no one is accountable. I realize in the boardroom this is a feature not a bug but on the shop floor it is not a good feature.

    All the same arguments about safety can be made about any programmatic output that people actually care about. Factory equipment safety failures happen to be a low probability high stakes example.
    If you want higher stakes consider power plant burner control systems. Consider petrochemical refinery controls. Medical device and Drug Manufacturing.

    I remember when Safety Systems had to be hardwired. No programmatic involvement in safety allowed. Mechanical switches and relays because software was just not reliable.

    AI is not yet reliable enough to be trusted with safety or process control.
    Not yet.

    • Re:

      So are people. Ask any teacher or maintenance programmer. Watch any trivia show. People answer incorrectly all the time.

      I find it strange people believe there must always be responsibility for failure. Nothing is ever assured; there is always risk in everything. Zero risk is a counterproductive, unreal, unachievable concept. The goal is always seeking an acceptable level of risk over a defined set of constraints as determined by politics and policy.

      I suspect in the near future we will see automation invo

  • Long time ago there was a movement to explain what the computer should do, in more or less plain English instead of mysterious codes. It was called COBOL. It, and other high-level languages of that time did indeed change coding a lot. But the need for programmers did not go away, at all.

    The real art of programming includes being aware of different failure modes, error handling, and considering malicious user input, as well as a deep understanding of what the program is supposed to do, and finding an accepta

  • ...killed by programmers. This is what will happen with AI, hoping that this will bring back serious journalism, as we knew it before.
  • Basically, the idea of intelligent/expert system compilers that can generate code from highly abstract descriptions is fifth generation programming. This has been talked about for as long as I've been a programmer (I started in 1978), and I seriously doubt ChatGPT is at the point where it could implement it usefully. As far as I can tell, code produced by AI systems tends to be of very poor quality (bug-ridden, unreliable, with tons of security defects). Of course, that won't stop companies using ChatGPT co

    • Re:

      To further the point, a lot of the folks I've talked to, even when they do admit that ChatGPT isn't there yet, will declare "oh, but it came out of nowhere and is like 80% there, so it'll be a done deal in the next year or two".

      Which ignores the fact that it didn't come out of nowhere: over a decade ago IBM demonstrated almost this level of functionality on Jeopardy. However, despite that demonstration, they weren't able to translate that to business value. Now OpenAI has made it iterative and giv

  • I get the feeling that whatever level of automation happens, there will be programmers working 60 hour weeks. Even as the pool of jobs shrinks, there will be overworked folk still fighting bosses to work remotely or keep a manageable workload!
  • "one that doesn't require us to learn code but instead transforms human-language instructions into software."

    Transforming human-language instructions into software. Didn't that use to be known as pseudocode?
  • I agree. I have always thought that it was bizarre to teach children "coding". To me, that was like someone in 1920 teaching kids to use a phone switchboard: "It's the future!"

    I mean, we all saw how Captain Kirk talked to the computer, and it was able to act on his instructions. And I recall episodes of both Superman (the original TV series) and The Outer Limits (original) in which people spoke to computers, and the computers understood and acted.

    So did we not see that programming would be something interim

  • .. using all new code.

    (hit enter, wait)
    (wait)

    What comes next?
    • Re:

      Presumably, it takes everything it has tagged as 'Linux' and plagiarizes the hell out of it while also scrambling it a bit to obfuscate the origins.

      This will result in something that won't boot, but requires an improbably large ISO to install.

  • You ever do that challenge where you get a few teams with a few Lego blocks and one team member tries to explain a diagram of a shape to make whilst the rest of the team try to assemble the pieces? And even with a few pieces, the shapes end up being radically different between teams? If we do get rid of programming, we need to make Business Analysts a whole lot better!
  • Over the years I've coded with PL-1, Fortran, BASIC and Pascal. Then I tried to learn Java. I'm a chemist, not a programmer, so it was never full time. I just never could get around the complexity of Java. I can't see how adding AI into the mix is going to make it simpler. You still have to figure what you want the code to do and you have to check to see if AI did it right. Given ChatGPT's tendency to lie, I don't know that I would trust it.
  • Symbolic maths will be the arbiter of determining AI compositional programming success. Any machine that can grok symbolism at higher mathematics level - ends human drudgery.

  • Or how did the whole "no code" thing turn out?

    Sorry. AI isn't the panacea of all things. This guy just made a click-bait article.

    AI cannot "create" it can "generate" and this is a significant difference. It can only generate based on what it's trained to do. But if you want something that hasn't been done before, you need a human to work that out.

    AI can be a good "thumbnail" thing to start finding new options that may be already within the pattern but just not seen by the human eye.

    AI can help resolve thing

  • He's describing the transition from what was a 3GL language model for coding phase zero, "unconscious", to the same programming phase, but using 4GL tools.

    As an aside, I wish I could find the reference to the "N programming phases", where phase zero is "unconscious", i.e. unaware one's actions are programming a machine (e.g. spreadsheet macros).

    Tell the machine what you want, not how to do it. It's an evolution of the language model, just as Lem described in GOLEM XIV/Imaginary Magnitude.

    LLM tech

  • There are already shitloads of totally useless "coders" out there. Writing programming code is so simple any idiot can do it -- and many idiots do. The hard part, which will not be solved by Statistical Modelling (aka ML/AI), is "how" to solve the problem and "what" is to be achieved by the cut'n'paste code.

    It has been this way since the dawn of "making machines do useful stuff". The hard part is designing "how" to accomplish what is desired. Reducing that to the instructions that an idiot (or a machine) ca

  • "This article was written by ChatGPT."

  • So even more 'developers' won't know what is going on and how stuff actually works. I like creating applications; I like to know how they work so I can fix stuff when they don't. I'm well aware that AI will take my job within a decade or so, and I'm also well aware society isn't ready for so many people without a job.
  • Didn't Jaron Lanier say something along the lines of the AI future basically being a planet of help desks?

  • After years of everyone telling journalists to learn to code when they lose their jobs, the journalists are back to tell programmers to learn to write in plain English when they lose theirs!

  • You still have to collect requirements, determine the design, and specify the design. Once I get to the coding stage I can finally relax. When you have an LLM (large language model), expert system, or AI working for you, you still need to work out what you want to ask it to make. AI will probably accelerate the process. But I suspect it means programmers today will be expected to produce more software, using AI as a tool, rather than it eliminating their jobs. Why work on one release a month when you can jug

  • The actual goal is not the kind of toy examples we see, like "Write a BASIC program to implement bubble sort." The actual goal is to turn human languages into some sort of declarative programming. Eg, "Write a program that makes my phone a single remote control for my TV and multiple streaming services." I'll be excited when the AI can do that, and then accept additional constraints/goals when the first result is not what the user had in mind. Heck, I'll be excited when the AI can actually start asking
  • Yeah right, the end of programming has been announced quite a few times already, when X first appeared, being X:
    • "low code" platforms
    • 4th generation languages
    • Visual programming
    • Compilers
    • ... (some I missed)

    and now for generative models, yet we're still here.

  • AI has a long way to go before it can be trusted to produce desired code. Practically every request I've made of ChatGPT (including 4.0) does not pass muster. Programming skills are often required to identify deficiencies in generated code, and programming skills are necessary to write adequate prompts. Yes, AI may sometimes be helpful to a programmer, but I've experienced the current AI often posing a hindrance, because it doesn't understand the problem area and composing an adequate, sufficiently detailed prompt--and then evaluating the generated code--isn't worth the time and effort of a seasoned programmer.
    • Re:

      You can use other human languages also, so it doesn't need to be English.

      Fun fact: When I used my native language to write a program with ChatGPT, ChatGPT named some of the variables in English and some in my native language.

      But I agree with what you say. At least at its current level, you can't do any serious work if you don't know how to program. But there is one thing it can do: translate code from one language to another. E.g. C -> HTML + Javascript.

      I tried with this example:
      https://www.programiz.c [programiz.com]

      • Re:

        the funny thing is, I could teach an AI so many issues I get daily and maybe it could teach me. Unfortunately, a DoD firewall prevents that.

    • Re:

      You sound a lot like those people who in the year 2015 said that it would take at least a decade for computers to beat professional human players at go. If Google wanted, it could make this AI within 2 years. It would not be perfect, but it could write large applications with millions of lines of code, and the code quality would be much better than what ChatGPT can provide. I estimate 2 years because that's how long it has usually taken them to solve an "unsolvable" or "impossible" problem with AI. (like go or protein

