Use the API
Usage
Graal Platform REST API endpoints can be invoked in any of the standard ways of calling a RESTful API. This section describes how to use the Graal Platform REST API, with cURL as the example client.
:::tip Using and Configuring cURL
You can download cURL here. Learn how to use and configure cURL here.
:::
Authentication
Graal Platform's REST API supports these forms of authentication:
- Basic authentication using your username and password
- Basic authentication using your username and API key
- Basic authentication using an access token in place of a password
- Bearer authentication, passing your access token in an authorization header (Authorization: Bearer <access-token>)
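With cURL, these forms map onto the -u and -H options. A minimal sketch follows; the username, API key, and token are placeholders, and the endpoint reuses the base URL from the Examples section below:

```bash
# Basic authentication with a username and API key (placeholder credentials)
curl -u "you@example.com:YOUR_API_KEY" https://api.graal.systems/api/v1/jobs

# Bearer authentication with an access token (placeholder token)
curl -H "Authorization: Bearer $ACCESS_TOKEN" https://api.graal.systems/api/v1/jobs
```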
Headers
By default, Graal Platform sets several rate-limiting HTTP headers on its responses:
- X-Ratelimit-Burst-Capacity: the configured burstCapacity value
- X-Ratelimit-Replenish-Rate: the configured replenishRate value
- X-Ratelimit-Remaining: the number of requests you may still send in the next second
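You can inspect these headers on any response. For example, with cURL you can dump only the headers and filter for the rate-limit values; the endpoint below is illustrative, and you should add whichever authentication option you use:

```bash
# Dump the response headers, discard the body, and keep only the rate-limit headers
curl -s -D - -o /dev/null https://api.graal.systems/api/v1/jobs | grep -i '^x-ratelimit'
```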
Using Graal Platform CLI
Graal Platform CLI is a compact client that provides a simple interface for automating access to the platform. As a wrapper around the REST API, it simplifies automation scripts, making them more readable and easier to maintain. Note that many of the functions available through the REST API are also available through the CLI; consider which method best meets your needs.
See Using the command line tool (graalctl)
Examples
via curl
Set common vars:
export URL=https://api.graal.systems/api/v1
OPTS=() # common cURL options, e.g. authentication (Bash arrays cannot be exported)
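If the endpoint requires authentication, the OPTS array is a convenient place for it. The credentials below are placeholders, using the forms described under Authentication:

```bash
# Basic authentication with a username and API key (placeholder credentials)
OPTS=(-u "you@example.com:YOUR_API_KEY")
# ...or bearer authentication with an access token
OPTS=(-H "Authorization: Bearer $ACCESS_TOKEN")
```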
Add a Bash job...
JOB_ID=$(curl -s "${OPTS[@]}" -d '{
"name": "Demo",
"description": "Demo Bash job",
"timeout_seconds": 3600,
"max_retries": 1,
"parameters": ["world"],
"options": {
"type": "bash",
"docker_image": "spark:3.0.1-jre11-hadoop3.2-azure",
"lines": ["echo \"Hello $1\""]
}
}' -H "Content-Type: application/vnd.graal.systems.v1.job+json;charset=UTF-8" -X POST $URL/jobs | jq -r -c '.id')
...or a Spark job
JOB_ID=$(curl -s "${OPTS[@]}" -d '{
"name": "Demo",
"description": "Demo Spark job",
"timeout_seconds": 3600,
"max_retries": 1,
"parameters": ["--path", "test"],
"schedule": {"type": "once"},
"options": {
"type": "spark",
"main_class_name": "fr.layer4.data.spark.examples.Multi",
"namespace": "default",
"docker_image": "spark:3.0.1-jre11-hadoop3.2-azure",
"file_url": "examples-1.0-SNAPSHOT-uber.jar",
"conf": {
"spark.kubernetes.driverEnv.SPARK_PRINT_LAUNCH_COMMAND": "1",
"spark.executor.instances": "1",
"spark.executor.memory": "512m",
"spark.driver.cores": "1"
}
}
}' -H "Content-Type: application/vnd.graal.systems.v1.job+json;charset=UTF-8" -X POST $URL/jobs | jq -r -c '.id')
Check that the job has been correctly added
curl -s "${OPTS[@]}" $URL/jobs/$JOB_ID
Add a run
RUN_ID=$(curl -s "${OPTS[@]}" -d '{
"name": "Run for @2020-09-20",
"initiator": "datafactory-XXX",
"description": "Run baby, run"
}' -H "Content-Type: application/vnd.graal.systems.v1.run+json;charset=UTF-8" -X POST $URL/jobs/$JOB_ID/runs | jq -r -c '.id')
View the status of a run
curl -s "${OPTS[@]}" $URL/jobs/$JOB_ID/runs/$RUN_ID | jq -r -c '.status'
And the logs
curl -s "${OPTS[@]}" $URL/jobs/$JOB_ID/runs/$RUN_ID/logs
And finally, delete the job
curl "${OPTS[@]}" -X DELETE $URL/jobs/$JOB_ID
via PowerShell
Define common vars:
$baseUrl="https://api.graal.systems/api/v1/"
Invoke-WebRequest "$baseUrl/jobs"
$headers = @{
"Content-Type" = "application/vnd.graal.systems.v1.job+json"
}
Prepare a Spark Scala/Java job:
$body = @{
"name" = "Demo"
"description" = "Demo Spark job"
"timeout_seconds" = 600
"max_retries" = 1
"parameters" = "--path", "test"
"metadata" = @{
"debug" = "all"
}
"options" = @{
"type" = "spark"
"main_class_name" = "fr.layer4.data.spark.processing.Demo"
"namespace" = "data"
"docker_image" = "spark:3.0.1-jre11-hadoop3.2-azure"
"file_url" = "examples-1.0-SNAPSHOT-uber.jar"
"conf" = @{
"spark.executor.instances" = "1"
"spark.executor.memory" = "512m"
"spark.driver.cores" = "1"
}
}
}
Or a Python job:
$body = @{
"name" = "Demo"
"description" = "Demo PySpark job"
"options" = @{
"type" = "spark"
"docker_image" = "spark-py:3.0.1-jre11-hadoop3.2-azure"
"file_url" = "main.py"
"py_files" = @("bundle.zip")
"conf" = @{
"spark.executor.instances" = "1"
"spark.executor.memory" = "512m"
"spark.driver.cores" = "1"
}
}
}
Create the job, then add a run
$job = Invoke-WebRequest "$baseUrl/jobs" -Method Post -Body ($body | ConvertTo-Json -Depth 5) -Headers $headers -UseBasicParsing | Select-Object -Expand Content | ConvertFrom-Json
$headers = @{
"Content-Type" = "application/vnd.graal.systems.v1.run+json"
}
$body = @{
"name" = "Run for partition @2020-09-20"
"initiator" = "datafactory-XXX"
"description" = "Demo Spark job"
}
$jobId=$job.id
$run = Invoke-WebRequest "$baseUrl/jobs/$jobId/runs" -Method Post -Body ($body | ConvertTo-Json) -Headers $headers -UseBasicParsing | Select-Object -Expand Content | ConvertFrom-Json
$runId=$run.id
Invoke-WebRequest "$baseUrl/jobs/$jobId/runs/$runId" -UseBasicParsing | ConvertFrom-Json
View logs
Invoke-WebRequest "$baseUrl/jobs/$jobId/runs/$runId/logs" | Select-Object -Expand Content
Delete job
Invoke-WebRequest "$baseUrl/jobs/$job" -Method Delete