
Using AI and Machine Learning To Create Software

source link: https://dzone.com/articles/use-ai-and-machine-learning-to-create-software


I started implementing OpenAI into Magic. It took me no more than two days to teach it Hyperlambda to an accuracy of 90%. What does this mean for our industry?


Jan. 12, 23 · Tutorial

I've just released a new version of Aista Magic Cloud that allows you to use natural language to create Hyperlambda backend code, and its accuracy is at 90% after only two days of experimenting with the thing. To appreciate this feat, I suggest you watch the following YouTube video.

The way it works is that you start out with a base model; I found Curie to give me the best results. Then you supply OpenAI's API with a bunch of questions and answers. This is the most critical part, since the better your questions and answers are, the better your end result becomes. I found that extrapolating my questions with variables taught the model to recognize those variables and substitute them, composing larger blocks of code from smaller blocks more naturally. An example could be the following questions, each paired with a code snippet that solves it.

  • Insert company_name and org_no into my customers table in my crm database
  • Insert first_name and last_name into my contacts table in my customers database
  • Insert description into status table in my cms database
  • Etc ...
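Variants like these can be generated mechanically. Below is a minimal Python sketch of such template-based extrapolation; the column, table, and database names are invented for illustration, and this is not the author's actual tooling:

```python
# Minimal sketch of extrapolating training questions by substituting
# variables into a template. All example values are invented for
# illustration; this is not the author's actual pipeline.
TEMPLATE = "Insert {columns} into my {table} table in my {database} database"

examples = [
    (["company_name", "org_no"], "customers", "crm"),
    (["first_name", "last_name"], "contacts", "customers"),
    (["description"], "status", "cms"),
]

def extrapolate(examples):
    """Expand each (columns, table, database) tuple into a question string."""
    return [
        TEMPLATE.format(
            columns=" and ".join(columns), table=table, database=database)
        for columns, table, database in examples
    ]

for question in extrapolate(examples):
    print(question)
```

Each template can be paired with a matching code-snippet template, so one hand-written example fans out into many question/answer pairs.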

The above might seem ridiculous to a human being, but by extrapolating similar questions such as the above, OpenAI is able to identify variables that it can later find in your natural text as you ask it to solve specific problems. You can even automate the creation of such training examples using scripts; I have even seen people use OpenAI and ChatGPT themselves to create training material for OpenAI and ChatGPT. Below is an example of a sentence I asked it after training it.

Create a Hyperlambda endpoint that inserts name, address and zip into my customers table in my crm database

For the above sentence, it produced the following result:

.arguments
   name:string
   address:string
   city:string
   country:string
data.connect:magic
   data.create
      table:customers
      values
         name:x:@.arguments/*/name
         address:x:@.arguments/*/address
         city:x:@.arguments/*/city
         country:x:@.arguments/*/country
response.status.set:201
return
   result:Insert was a success

The above code is roughly 90% correct: notice how it inserts city and country instead of the zip I asked for. If I had done a better job extrapolating my training data, it would probably have performed with 100% accuracy.
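A cheap post-processing check can flag this class of slip automatically: compare the argument names the generated code declares against the names the prompt requested. The line-based parser below is a naive sketch keyed to the snippet shape above, not a real Hyperlambda parser:

```python
def declared_arguments(code):
    """Collect argument names declared under the .arguments node.

    Naive line-based sketch: every 3-space-indented 'name:type' line
    after '.arguments' counts as a declaration. Not a real Hyperlambda
    parser; just enough for snippets shaped like the one above.
    """
    args, in_args = set(), False
    for line in code.splitlines():
        stripped = line.strip()
        if stripped == ".arguments":
            in_args = True
        elif in_args and line.startswith("   "):
            args.add(stripped.split(":", 1)[0])
        elif in_args:
            break  # first non-indented line ends the .arguments block
    return args

def check(code, requested):
    """Return the requested names the generated code failed to declare."""
    return set(requested) - declared_arguments(code)

generated = """.arguments
   name:string
   address:string
   city:string
   country:string"""

# The prompt asked for name, address and zip, so 'zip' is reported missing.
print(check(generated, ["name", "address", "zip"]))  # -> {'zip'}
```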

Code Is a Classification Problem

Another thing I realized after playing around with my training material is that code is a classification problem; at least, this is true for Hyperlambda and OpenAI. This allowed me to use smaller models that are less expensive and faster. For instance, I used Curie with a lot of success; it is smaller, cheaper, and faster than DaVinci. This allowed me to submit my training data to OpenAI for roughly $1 and ask questions for roughly 0.3 cents each. I tried using Ada, but it failed miserably. I never tried Babbage, which might have worked and is even less expensive than Curie.

However, the point is that there is a finite number of correct solutions within a programming language for a specific problem. For Hyperlambda, this is especially true since it is a very high-level programming language, more like a "5th generation programming language" intended to orchestrate data rather than perform algorithm-intensive tasks. This allowed me to use classification logic as I trained my model, which of course produces much more accurate results.
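To make the classification framing concrete, here is a deliberately naive toy: pick the training snippet whose question shares the most words with the incoming question. This is only a stand-in for what the fine-tuned model learns; the questions and labels are invented:

```python
# Toy illustration of treating code generation as classification: map a
# natural-language question to the closest known snippet by word overlap.
# Training questions and snippet labels are invented for illustration.
def tokenize(text):
    return set(text.lower().split())

training = {
    "Insert description into status table in my cms database": "insert-snippet",
    "Select all records from status table in my cms database": "select-snippet",
}

def classify(question):
    """Return the training question with the largest word overlap."""
    q = tokenize(question)
    return max(training, key=lambda t: len(q & tokenize(t)))

best = classify("Insert a description into the status table of my cms database")
print(training[best])  # -> insert-snippet
```

A real model generalizes far beyond word overlap, but the shape of the task is the same: a finite set of correct snippets, selected by the question.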

Another thing I noticed was that by providing multiple questions for the same snippet of code, I was able to drastically increase the quality, which again follows from code being a classification problem. I had 691 training snippets in total, but probably only some 250 distinct code snippets, implying the same code snippet was reused for multiple different questions. One thing I intend to change in future versions is to extrapolate much more, allowing Curie to recognize variables in my natural text more easily. Currently, roughly 90% of the generated code works correctly. However, due to Hyperlambda's simple syntax, I am confident I can easily bring that number up to 99% accuracy with fewer than 1,000 training snippets.
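This many-questions-per-snippet structure maps directly onto the file format the legacy OpenAI fine-tuning API consumed: a JSONL file with one prompt/completion object per line, where the same completion may appear under many prompts. The sketch below uses invented pairs; the separator and stop-token conventions follow OpenAI's fine-tuning guide of the time:

```python
# Sketch of packaging question/answer pairs into the JSONL format the
# legacy OpenAI fine-tuning API expected: one JSON object per line with
# "prompt" and "completion" keys. The pairs are invented; separator and
# stop-token conventions follow OpenAI's fine-tuning guide of the time.
import json

SNIPPET = (
    "data.connect:cms\n"
    "   data.create\n"
    "      table:status\n"
    "      values\n"
    "         description:x:@.arguments/*/description"
)

# Two different questions deliberately reuse the same completion.
pairs = [
    ("Insert description into status table in my cms database", SNIPPET),
    ("Add a description to the status table of my cms database", SNIPPET),
]

def to_jsonl(pairs, path="train.jsonl"):
    """Write prompt/completion pairs as one JSON object per line."""
    with open(path, "w", encoding="utf-8") as f:
        for prompt, completion in pairs:
            f.write(json.dumps({
                # A fixed separator marks the end of the prompt ...
                "prompt": prompt + "\n\n###\n\n",
                # ... and the completion starts with whitespace and ends
                # with a stop sequence the model learns to emit.
                "completion": " " + completion + " END",
            }) + "\n")
    return path

to_jsonl(pairs)
```

The resulting file is what gets uploaded when creating the fine-tune job.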

All in all, the entire process of implementing this into Magic took me two days, plus a couple of days of thinking about the problem first. If you want to play around with it, you can register for a free 90-day cloudlet below and create your own OpenAI API key.

