
Kubeflow deployment: part 2

Instead of using the other, stranger components of Kubeflow, we can use just Kubeflow Pipelines, which is much easier to deploy and use.

After the deployment (which took less than 20 minutes), I tested my first example and it succeeded:

import kfp
from kfp.components import func_to_container_op, InputPath, OutputPath


# Writes a small CSV file to the output path provided by Kubeflow Pipelines
@func_to_container_op
def do_something(output_csv: OutputPath(str)):
    with open(output_csv, "w") as writer:
        writer.write("name,score\n")
        writer.write("hello,100\n")
        writer.write("world,50\n")


# Reads the CSV produced by the previous step and prints it line by line
@func_to_container_op
def show_something(input_csv: InputPath()):
    with open(input_csv, "r") as reader:
        for line in reader:
            print(line)


# The pipeline: produce the CSV, then show it
def do_and_show_something():
    csv_data = do_something()
    csv_data.set_memory_limit("1G")
    csv_data.set_cpu_limit("0.5")
    show_op = show_something(csv_data.output)
    show_op.set_memory_limit("1G")
    show_op.set_cpu_limit("0.5")


if __name__ == "__main__":
    kfp.compiler.Compiler().compile(do_and_show_something, __file__ + ".yaml")

To limit an op's resources (CPU/memory), we only need to call the corresponding methods on the op (operation).
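Besides limits, the v1 SDK's ContainerOp also exposes resource requests and GPU limits. A minimal sketch of those methods, which would go inside the pipeline function (the values here are just illustrative):

op = do_something()
op.set_memory_request("512M")  # what the scheduler reserves for the container
op.set_memory_limit("1G")      # hard upper bound
op.set_cpu_request("0.25")
op.set_cpu_limit("0.5")
op.set_gpu_limit(1)            # requests one nvidia.com/gpu by default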

Then run python3 xxx.py to compile the Python script into a YAML file, and upload the YAML file to Kubeflow Pipelines.
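Alternatively, the SDK can talk to the cluster directly and skip the manual upload. A minimal sketch, assuming the Pipelines API server is reachable at localhost:8080 (for example through a kubectl port-forward); the host and file names are placeholders:

import kfp

client = kfp.Client(host="http://localhost:8080")  # placeholder endpoint

# Upload the compiled YAML as a named pipeline...
client.upload_pipeline("pipeline.yaml", pipeline_name="do-and-show-something")

# ...or compile and launch a run in one call, without any YAML file
client.create_run_from_pipeline_func(do_and_show_something, arguments={})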

That is how easy it is to use just the Pipelines component of Kubeflow!

Like this: [screenshot of the pipeline run in the Kubeflow Pipelines UI]
