headersAPI = {
    'Content-Type': 'application/json',
    'accept': 'application/json',
    'Authorization': 'Bearer XXXXXXXXXXXXXXXXXXXXXXXXXX',
}
skill_response = requests.get("XXXXXX", headers=headersAPI)

log.info(skill_response.text)
skill_json = skill_response.json()
print(skill_json)  # print the JSON data to verify it
    
log.info('skills data')
log.info(skill_json["status"]) 
        
DataSink0 = glueContext.write_dynamic_frame.from_options(
    frame=skill_json, connection_type="s3", format="csv",
    connection_options={"path": "s3://xxxxx/", "partitionKeys": []},
    transformation_ctx="DataSink0")

job.commit()

TypeError: frame_or_dfc must be DynamicFrame or DynamicFrameCollection. Got <class 'dict'>

While writing to S3 I get this error: 'dict' object has no attribute '_jdf'
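For context, a minimal illustration of the root cause (the response body here is made up): `response.json()` in requests parses the body into plain Python objects (`dict`/`list`), not a Glue DynamicFrame, which is why `write_dynamic_frame` rejects it.

```python
import json

# requests' response.json() is equivalent to json.loads(response.text):
# it returns plain Python objects, never a Spark/Glue frame.
body = '{"status": "ok", "data": [{"skill": "python"}]}'
skill_json = json.loads(body)
print(type(skill_json))  # → <class 'dict'>
```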


1 Answer


Transforming a JSON response into a DynamicFrame is possible by first creating a DataFrame from the response string (discussed here) and then converting that DataFrame into a DynamicFrame.

This example should work:

import requests
from awsglue.job import Job
from pyspark.context import SparkContext

from awsglue import DynamicFrame
from awsglue.context import GlueContext

sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
job = Job(glueContext)

r = requests.get(url='https://api.github.com/users?since=100')

df = spark.read.json(sc.parallelize([r.text]))

dynamic_frame = DynamicFrame.fromDF(
    df, glue_ctx=glueContext, name="df"
)

#dynamic_frame.show()

DataSink0 = glueContext.write_dynamic_frame.from_options(
    frame=dynamic_frame,
    connection_type="s3", format="csv",
    connection_options={"path": "s3://xxxxx/",
                        "partitionKeys": []},
    transformation_ctx="DataSink0")
job.commit()
  • Thanks Johannes. I can see the file in s3 bucket. The file name looks weird run-DataSink0-6-part-r-00000 without .json extension. The data looks good when viewed in notepad. Can we change the name of the file to skill_data.json? – Chandar Jul 28 '21 at 08:19
  • Unfortunately we can not. That is the way that Spark is naming files. – Robert Kossendey Jul 28 '21 at 11:01