
Building a Generative AI Processor in Python


Why not create a Python Processor for Apache NiFi 2.0.0? In this tutorial, discover whether the challenge to do so is easy or difficult.

Jan. 23, 2024 · Tutorial

It was a really snowy day when I started this. I saw the IBM WatsonX Python SDK and realized I needed to wire up my Gen AI model (LLM) to send my context-augmented prompt from Slack. Why not create a Python Processor for Apache NiFi 2.0.0? I guessed that wouldn't be hard. It was easy!

IBM WatsonX AI has a huge list of powerful foundation models to choose from; just don't pick the v1 models, as they are going to be removed in a few months.

After picking a model, I tested it in WatsonX's Prompt Lab, then ported it to a simple Python program. Once that worked, I started adding features like properties and the transform method. That's it.
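As a rough sketch, that standalone program looked something like the following. I am assuming the ibm-watson-machine-learning SDK here; the model ID, generation parameters, and credentials are placeholders, not the exact values from my flow.

    from ibm_watson_machine_learning.foundation_models import Model

    # Placeholder credentials; substitute your own IBM Cloud values.
    credentials = {
        "url": "https://us-south.ml.cloud.ibm.com",
        "apikey": "YOUR_IBM_CLOUD_API_KEY",
    }

    # Placeholder model ID; pick any current (non-v1) foundation model.
    model = Model(
        model_id="ibm/granite-13b-chat-v2",
        params={"decoding_method": "greedy", "max_new_tokens": 200},
        credentials=credentials,
        project_id="YOUR_PROJECT_ID",
    )

    print(model.generate_text(prompt="What is Apache NiFi?"))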

Source Code

Here is the link to the source code.

Now we can drop our new LLM-calling processor into a flow and use it like any other built-in processor. Note that the Python API requires Python 3.9+ on the machine hosting NiFi.

Package-Level Dependencies

Add to requirements.txt.
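A minimal sketch of what that file might contain: pandas matches the dependency declared in ProcessorDetails below, and requests is my assumption for making the REST call.

    # requirements.txt (sketch; exact packages depend on your processor)
    pandas
    requests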

Basic Format for the Python Processor

You need to import various things from the nifiapi library. You then set up your class, CallWatsonXAI. You need to include an inner Java class definition and a ProcessorDetails class that includes the NiFi version, dependencies, a description, and some tags.

class ProcessorDetails:
    version = '0.0.1-SNAPSHOT'   # no trailing comma here; one would turn the value into a tuple
    dependencies = ['pandas']
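For context, here is a minimal sketch of how those pieces fit together, assuming the standard FlowFileTransform pattern from the nifiapi package; the description and tags shown are illustrative, not the article's exact values.

    from nifiapi.flowfiletransform import FlowFileTransform

    class CallWatsonXAI(FlowFileTransform):
        class Java:
            implements = ['org.apache.nifi.python.processor.FlowFileTransform']

        class ProcessorDetails:
            version = '0.0.1-SNAPSHOT'
            description = 'Calls IBM WatsonX AI to generate text for a prompt.'  # illustrative
            tags = ['watsonx', 'genai', 'llm']                                   # illustrative
            dependencies = ['pandas']

        def __init__(self, **kwargs):
            super().__init__()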

Define All The Properties For the Processor

You need to set up a PropertyDescriptor for each property, including things like a name, description, required, validators, expression_language_scope, and more.
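A sketch of one such descriptor, assuming the nifiapi.properties module; the "Prompt" property name and its settings are illustrative.

    from nifiapi.properties import PropertyDescriptor, StandardValidators, ExpressionLanguageScope

    # Inside the CallWatsonXAI class:
    PROMPT = PropertyDescriptor(
        name="Prompt",
        description="Prompt text to send to the model.",
        required=True,
        validators=[StandardValidators.NON_EMPTY_VALIDATOR],
        expression_language_scope=ExpressionLanguageScope.FLOWFILE_ATTRIBUTES,
    )

    def getPropertyDescriptors(self):
        return [self.PROMPT]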

Transform Main Method


Here we include the imports needed. You can access properties via context.getProperty. You can then set attributes for the output FlowFile, as shown via attributes. We then set contents for the FlowFile output and, finally, the relationship, which for this guide is always success. You should add something to handle errors; I still need to add that.
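Putting that together, here is a minimal sketch of the transform method. The endpoint URL, payload shape, model ID, and property name are my placeholders for the WatsonX AI REST call, not the article's exact code.

    import json
    import requests  # assumed HTTP client for the REST call
    from nifiapi.flowfiletransform import FlowFileTransformResult

    # Inside the CallWatsonXAI class:
    def transform(self, context, flowfile):
        # Read the incoming FlowFile content as the prompt
        prompt = flowfile.getContentsAsBytes().decode('utf-8')
        # Read a processor property (illustrative name)
        api_key = context.getProperty("API Key").getValue()
        # Call a text-generation endpoint (placeholder URL, model, and payload)
        resp = requests.post(
            "https://us-south.ml.cloud.ibm.com/ml/v1/text/generation",
            headers={"Authorization": f"Bearer {api_key}"},
            json={"model_id": "ibm/granite-13b-chat-v2", "input": prompt},
        )
        resp.raise_for_status()  # real code should route failures to a failure relationship
        # Attributes, contents, and relationship for the outgoing FlowFile
        return FlowFileTransformResult(
            relationship="success",
            contents=json.dumps(resp.json()),
            attributes={"mime.type": "application/json"},
        )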

If you need to redeploy, debug, or fix something, you may delete the entire work directory while NiFi is stopped. Doing so, however, may result in NiFi taking significantly longer to start up the next time, as it must fetch all extensions' dependencies from PyPI and expand all Java extensions' NAR files.

So to deploy it, we just need to copy the Python file to the nifi-2.0.0/python/extensions directory and possibly restart our NiFi server(s). I would start by developing locally on a laptop with either a local GitHub build or Docker.

Now that we have written a processor, let's use it in a real-time streaming data pipeline application.

Example Application

Building off our previous application that receives Slack messages, we will take those Slack queries, run them against the Pinecone or Chroma vector databases, and send the resulting context along with our call to IBM's WatsonX AI REST API for Generative AI (LLM).

You can find those previous details here:

NiFi Flow

  1. ListenHTTP: On port 9518, path /slack; NiFi is a universal REST endpoint
  2. QueryRecord: JSON cleanup
  3. SplitJSON: $.*
  4. EvaluateJSONPath: Output attribute for $.inputs
  5. QueryChroma: Call server on port 9776 using the ONNX model, export 25 rows
  6. QueryRecord: JSON -> JSON; limit 1
  7. SplitRecord: JSON -> JSON; into 1 row
  8. EvaluateJSONPath: Export the context from $.document
  9. ReplaceText: Make the context the new Flow File
  10. UpdateAttribute: Update inputs
  11. CallWatsonX: Our Python processor to call IBM
  12. SplitRecord: 1 record, JSON -> JSON
  13. EvaluateJSONPath: Add attributes
  14. AttributesToJSON: Make a new Flow File from attributes
  15. QueryRecord: Validate JSON
  16. UpdateRecord: Add generated text, inputs, ts, UUID
  17. Kafka path, PublishKafkaRecord_2_6: Send results to Kafka
  18. Kafka path, RetryFlowFile: If the Apache Kafka send fails, try again
  19. Slack path, SplitRecord: Split into 1 record for display
  20. Slack path, EvaluateJSONPath: Pull out fields to display
  21. Slack path, PutSlack: Send formatted message to #chat group

This is a full-fledged Retrieval Augmented Generation (RAG) application utilizing ChromaDB. (The NiFi flow can also use Pinecone. I am working on Milvus, SOLR, and OpenSearch next.)

Full-fledged Retrieval Augmented Generation (RAG) application

Enjoy how easy it is to add Python code to your distributed NiFi applications.

