Testing Cloud Run for a non-HTTP use case


I'm just getting started with GCP and beginning to explore Cloud Run. I'm using Cloud Run for a non-HTTP purpose, i.e. I do not want to build a web application. I simply want to take a dataset from storage and run some calculations on it with my algorithm. The algorithm will be part of the Docker image, and the processed data will be written to a different file and stored in a different storage location. I tried to build a very basic version of this by taking 5 numbers in a CSV file and getting the sum of the first row. The sum is written to a file and uploaded to a different storage location. Here's the code I wrote in Python:

        import os
        import csv
        from datetime import datetime
        from flask import Flask, jsonify 
        from google.cloud import storage
        client = storage.Client()


        time = datetime.datetime.now().strftime("%H%M%S")
        bucket_name = "INPUT_STORAGE"
        blob_name = "file.csv"
        destination_bucket_name = "OUTPUT_STORAGE"
        
        app = Flask(__name__)

        from flask import Flask

    

        @app.route("/")
        def create_file_and_upload_to_gcp_bucket(blob_name,bucket_name):
            blob_file = bucket_name.get_blob(blob_name)
            with open(blob_file, 'r') as f:
                total=0
                for row in csv.reader(f):
                    total += float(row[0])
                print('The total is {}'.format(total))

            file_name = "my-file.txt"

            file_content = total + datetime.now().strftime('%m-%d-%y %H:%M:%S')
            with open(f"./{file_name}", "w") as file:
                file.write(file_content)

            bucket = client.get_bucket(destination_bucket_name)
            blob = bucket.blob(f"my-folder/{file_name}")
            blob.upload_from_filename(f"./{file_name}")

            #data = blob.download_as_string()

            #return jsonify(success=True, data=data.decode("utf-8")), 200
        if __name__ == '__main__':
            app.run(debug=True, host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))

I wrote the calculation logic myself, but found the rest of the template on the internet (the Flask setup, which port to open, etc.).

This is the Dockerfile:

        # Python image to use.
        FROM python:3.8

        ENV PYTHONUNBUFFERED True

        # Copy local code to the container image.
        ENV APP_HOME /app
        WORKDIR $APP_HOME
        ENV PORT 8080
        COPY . ./

        # Install production dependencies.
        
        RUN pip install --upgrade google-cloud-storage
        
        # Install any needed packages specified in requirements.txt
        #RUN pip install --trusted-host pypi.python.org 

        # Copy the rest of the working directory contents into the container at /app
        COPY . .

        # Run app.py when the container launches
        ENTRYPOINT ["python", "app1.py"]
        CMD exec gunicorn --bind :$PORT --workers 1 --threads 8 --timeout 0 main:app

However, I think I'm missing something, because I get the following error when I run this via Cloud Run: "Cloud Run error: Container failed to start. Failed to start and then listen on the port defined by the PORT environment variable. Logs for this revision might contain more information." What am I missing here?

CodePudding user response:

Looking at your second comment, Python is telling you:

AttributeError: type object 'datetime.datetime' has no attribute 'datetime'

This is because you've already imported datetime from datetime. You have two options:

import datetime

datetime.datetime.now().strftime('%m-%d-%y %H:%M:%S')

or

from datetime import datetime

datetime.now().strftime('%m-%d-%y %H:%M:%S')
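
Applied to the code in the question, which already uses the from datetime import datetime form, the timestamp line would simply drop the extra datetime. prefix:

from datetime import datetime

time = datetime.now().strftime("%H%M%S")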

You can avoid this long debugging process by running your Docker container locally first, before pushing it to the cloud.
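
For example (the image tag my-app below is just a placeholder), something along these lines reproduces the same startup behaviour on your own machine:

        # Build the image from the Dockerfile in the current directory.
        docker build -t my-app .

        # Run it, supplying the PORT variable that Cloud Run would otherwise set.
        docker run -e PORT=8080 -p 8080:8080 my-app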

CodePudding user response:

The default listening address for gunicorn is 127.0.0.1:8000. Your command line is only modifying the port number and not the binding address.

Change the command line option to listen on all interfaces instead of localhost:

--bind 0.0.0.0:$PORT
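
With that change, the launch line in the Dockerfile from the question would look like this (the rest of the line stays the same):

CMD exec gunicorn --bind 0.0.0.0:$PORT --workers 1 --threads 8 --timeout 0 main:app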

Your question is titled "non-HTTP use case". Note, however, that applications running on Cloud Run must support an HTTP request/response model:

Container runtime contract
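
As a rough sketch of how the same job can satisfy that contract (bucket access and the calculation itself are left as placeholders, not your exact code), the work can be wrapped behind an HTTP endpoint that performs the processing and then returns a response:

        import os
        from flask import Flask

        app = Flask(__name__)

        def run_calculation():
            # Placeholder for the real work: read the CSV from the input bucket,
            # sum the values, and upload the result to the output bucket.
            return 0

        @app.route("/")
        def trigger_job():
            total = run_calculation()
            # Cloud Run expects an HTTP response once the work has finished.
            return f"Done, total = {total}", 200

        if __name__ == "__main__":
            app.run(host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))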
