
Extract Predix Token

asked 2017-12-08 01:49:28 -0500 by Gerhard Willemse

updated 2017-12-08 10:50:14 -0500 by metadaddy

Please give me guidance on how to run this curl extract in StreamSets, if that is possible.

I currently retrieve a daily token to authorize access to a Predix dataset with:

curl 'https://810be097-b1e3-4f9c-b0bb-0d18052080bf.predix-uaa.run.aws-usw02-pr.ice.predix.io/oauth/token' \
  -H 'Pragma: no-cache' \
  -H 'content-type: application/x-www-form-urlencoded' \
  -H 'Cache-Control: no-cache' \
  -H 'authorization: Basic bW9ybmVkZXZpbGxpZXJzX2FwcDoxMDgxME1vcm5lQCM=' \
  --data 'client_id=secret_app&grant_type=client_credentials'
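A successful call returns a JSON body along these lines (token value shortened here; scope and lifetime depend on the client configuration):

{
  "access_token": "eyJhbGciOiJSUzI1NiIs...",
  "token_type": "bearer",
  "expires_in": 86399,
  "scope": "...",
  "jti": "..."
}

The access_token value is what the rest of the pipeline needs.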

2 Answers


answered 2018-02-27 00:09:27 -0500 by triballus

Folks, the first option seems like an ideal approach for me, but I have hit a basic snag: the HTTP Client processor configuration seems markedly different from the origin's. In particular, there is no pagination option, which, having picked up the token from the origin stage, I was expecting to use to drive the collection of records from the endpoint. Have I missed a basic step or misunderstood the concepts here? In short: can I configure an HTTP Client processor to send multiple requests and paginate results (as I can do with an HTTP Client origin)? Thanks all for any advice you can send my way :-)


Comments

Hi @triballus - you should ask a separate question, rather than posting this as an answer - more people will see it.

metadaddy (2018-07-18 21:22:11 -0500)

answered 2017-12-08 10:58:56 -0500 by metadaddy

There are a couple of options here.

  • If retrieving the token is an idempotent operation, that is, you can do it many times a day without a problem, configure an HTTP Client origin in your pipeline to get the token from that URL, with the relevant headers, and then an HTTP Client processor to actually use the token against the Predix API.
  • If you can only call the token API once a day, then you'll need to use something like the Dev Random origin to drive the pipeline, then a script evaluator (Groovy / JavaScript / Jython) with logic to persist the token to a file on disk and only retrieve it from the API when necessary. If you envisage the pipeline running continuously, you could skip the disk storage and just keep the token in the script's state variable, so the script retrieves it on pipeline start and then once a day (see the sketch after this list).
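For the second option, a minimal Jython Evaluator sketch might look like the following. It assumes the standard scripting bindings (records, output and the per-run state dict), reuses the token URL and credentials from the curl command in the question, and invents the /access_token field name and the expires_at bookkeeping key purely for illustration:

# Minimal sketch only -- adjust to your pipeline; field names below are made up.
import json
import time
import urllib2

TOKEN_URL = 'https://810be097-b1e3-4f9c-b0bb-0d18052080bf.predix-uaa.run.aws-usw02-pr.ice.predix.io/oauth/token'

def fetch_token():
    # Same request the curl command in the question makes.
    req = urllib2.Request(
        TOKEN_URL,
        data='client_id=secret_app&grant_type=client_credentials',
        headers={
            'Content-Type': 'application/x-www-form-urlencoded',
            'Authorization': 'Basic bW9ybmVkZXZpbGxpZXJzX2FwcDoxMDgxME1vcm5lQCM=',
        })
    body = json.loads(urllib2.urlopen(req).read())
    return body['access_token'], body.get('expires_in', 86400)

# Refresh the cached token when there isn't one yet or it has expired.
now = time.time()
if state.get('token') is None or now >= (state.get('expires_at') or 0):
    token, ttl = fetch_token()
    state['token'] = token
    state['expires_at'] = now + ttl

for record in records:
    # Assumes the origin produces map records, so the token can simply be
    # attached as a field for downstream stages to reference.
    record.value['access_token'] = state['token']
    output.write(record)

Whichever option you choose, the downstream HTTP Client processor can then pick the token off the record, for example by setting a request header to something like Authorization: Bearer ${record:value('/access_token')}.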

Comments

Thank you for your advice.

Gerhard Willemse (2017-12-08 21:31:18 -0500)