CFP Command Line Client – Adding a Job

Adding a job via cfpclient is a three-step process:

  1. create the job
  2. create the source and target nodes
  3. configure the source and target nodes

These instructions assume that you have already created the systems you will be using for the source and target nodes.

Creating a Job

To create a job via cfpclient, enter the following:
python cfp jobs --add --name 'CFP Job' --type 'single-source-batch'
Successfully added job (type=single-source-batch name=CFP Job) -> id:56b3c4cbc1e9c67b03690b53
+---------+---------------------+--------------------------+-------+-------+---------+
| name    | type                | id                       | nodes | valid | state   |
+---------+---------------------+--------------------------+-------+-------+---------+
| CFP Job | single-source-batch | 56b3c4cbc1e9c67b03690b53 | None  | False | invalid |
+---------+---------------------+--------------------------+-------+-------+---------+

where:

  • --name is the name of your job
  • --type is the type of the job; use “single-source-batch” for all jobs here

You now have a shell for the job; make a note of the job id, which in the example above is 56b3c4cbc1e9c67b03690b53. The next step is to fill in the shell with the source and target nodes.
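If you are scripting cfpclient, you can capture the job id instead of copying it by hand. A minimal sketch, assuming the success line keeps the `-> id:<24-hex-char>` format shown above:

```python
import re

# Success line printed by `cfp jobs --add` (taken from the output above).
output = ("Successfully added job (type=single-source-batch name=CFP Job) "
          "-> id:56b3c4cbc1e9c67b03690b53")

# Extract the 24-character hex id that follows "-> id:".
match = re.search(r"->\s*id:([0-9a-f]{24})", output)
job_id = match.group(1) if match else None
print(job_id)  # 56b3c4cbc1e9c67b03690b53
```

The same pattern works for the node-creation success lines later in this article, since they end with the same `-> id:` suffix.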

Creating the Source and Target Nodes

First, locate the ids of the systems you will use for your source and target nodes with a --list command:
python cfp systems --list
+---------------------------------------------+-----------+-----------------------+--------------------------+--------------------------+-------+
| name                                        | supertype | type                  | id                       | pop                      | valid |
+---------------------------------------------+-----------+-----------------------+--------------------------+--------------------------+-------+
| Box                                         | system    | box                   | 569d4689c1e9c66ae622bbe6 | 569d18f3c1e9c66ae622b52f | True  |
| Google Drive                                | system    | googledrive           | 569e61a2c1e9c66ae622e11e | 569d18f3c1e9c66ae622b52f | True  |
| Dropbox                                     | system    | dropbox               | 569fd3bbc1e9c6510327d5ed | 569d18f3c1e9c66ae622b52f | True  |
| Amazon S3                                   | system    | amazonS3              | 569fd384c1e9c6510327d5e5 | 569d18f3c1e9c66ae622b52f | True  |
+---------------------------------------------+-----------+-----------------------+--------------------------+--------------------------+-------+
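When scripting, you can look up a system id by name rather than reading it off the table. A minimal sketch, assuming the pipe-delimited table layout shown above (name in the first column, id in the fourth):

```python
# Find a system id by name in `cfp systems --list` output.
# Abbreviated sample rows taken from the listing above.
table = """
| Box | system | box | 569d4689c1e9c66ae622bbe6 | 569d18f3c1e9c66ae622b52f | True |
| Google Drive | system | googledrive | 569e61a2c1e9c66ae622e11e | 569d18f3c1e9c66ae622b52f | True |
"""

def system_id(listing, name):
    """Return the id from the row whose name column matches, else None."""
    for line in listing.splitlines():
        if not line.startswith("|"):
            continue  # skip border lines and blanks
        cells = [c.strip() for c in line.strip("|").split("|")]
        if len(cells) >= 4 and cells[0] == name:
            return cells[3]  # the id column
    return None

print(system_id(table, "Google Drive"))  # 569e61a2c1e9c66ae622e11e
```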

We will use the Google Drive system (id 569e61a2c1e9c66ae622e11e) as the source:
python cfp nodes --add --job 56b3c4cbc1e9c67b03690b53 --role source --system 569e61a2c1e9c66ae622e11e
Successfully added node (role=source system=569e61a2c1e9c66ae622e11e job=56b3c4cbc1e9c67b03690b53) -> id:56b3c5afc1e9c67b03690b55

and Box (id 569d4689c1e9c66ae622bbe6) for the target of our job:
python cfp nodes --add --job 56b3c4cbc1e9c67b03690b53 --role target --system 569d4689c1e9c66ae622bbe6
Successfully added node (role=target system=569d4689c1e9c66ae622bbe6 job=56b3c4cbc1e9c67b03690b53) -> id:56b3c5eec1e9c67b03690b57

Configuring the Source and Target Nodes

Now, let’s look at the default configuration of the source node with a --get --details command:

python cfp nodes --id 56b3c5afc1e9c67b03690b55 --get --details
[
  {
    "max_rx_files": 10,
    "job": "56b3c4cbc1e9c67b03690b53",
    "fields": [...
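Since the details output is JSON, it can be inspected programmatically. A minimal sketch; the output here is truncated, so the contents of the `"fields"` array below are hypothetical placeholders, not real CFP field names:

```python
import json

# Node details in the shape returned by `--get --details`; the "fields"
# entry is a made-up placeholder standing in for the truncated output.
details = json.loads("""
[
  {
    "max_rx_files": 10,
    "job": "56b3c4cbc1e9c67b03690b53",
    "fields": [{"name": "example_field"}]
  }
]
""")

node = details[0]
print(node["job"])                        # which job this node belongs to
print([f["name"] for f in node["fields"]])  # configured field names
```

Listing the field names of an already-configured node this way is a quick way to learn which fields a similar job uses.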

We want to add a source directory to our job. To add a single directory, enter:

python cfp nodes --id 56b3c5afc1e9c67b03690b55 --enumerate source_filedir --fields directory=/Users --set
+------+--------+--------------------------+--------------------------+--------------------------+-------+
| type | role   | system                   | job                      | id                       | valid |
+------+--------+--------------------------+--------------------------+--------------------------+-------+
| node | source | 569e61a2c1e9c66ae622e11e | 56b3c4cbc1e9c67b03690b53 | 56b3c5afc1e9c67b03690b55 | False |
+------+--------+--------------------------+--------------------------+--------------------------+-------+

If source_filedir contains multiple directories, the best way to load them is via an edited CSV export from the file chooser in the CFP UI:
python cfp nodes --id 56b3c5afc1e9c67b03690b55 --enumerate source_filedir --csv 'my dirs.csv' --set
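If you need to edit or generate such a CSV programmatically, a minimal sketch, assuming one directory path per row (export a file from the CFP UI file chooser first to confirm the exact column layout for your installation; the directory names below are made up):

```python
import csv

# Hypothetical source directories; replace with your own.
directories = ["/Users/alice", "/Users/bob", "/Shared/projects"]

# Write one directory per row, an assumed layout; verify it against a
# CSV exported from the CFP UI before passing it to --csv.
with open("my dirs.csv", "w", newline="") as f:
    writer = csv.writer(f)
    for d in directories:
        writer.writerow([d])
```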

Additional useful commands for creating jobs are:

  • --safe controls the validations that are performed; the default is probably best.
  • --fields can be used to specify additional configuration for each node. The best way to learn the names of the various fields is to look at a node that has already been configured in a similar job.
  • --fieldsep can be used to provide a custom separator for the values in --fields if needed for escaping purposes. Generally the syntax is --fieldsep=\;

Updated on February 5, 2020
