Background task in flask that write to database


I want to build a Flask app that reads emails from some email address and shows those emails on a route. I want a background task that fetches all the emails and writes them to the database, and I want it to run continuously in the background; I don't want to run this task when someone requests a route. Fetching emails and displaying them should be totally independent.

I think multiprocessing should work, but I don't know how to start two separate processes: one for fetching emails and writing them to the database, and another for the Flask app that shows emails from the database. Is it possible to do so? Please help me by giving a demo with multiprocessing where one process runs in the background and writes to the database, and the other is a Flask app. The example can be anything; it doesn't need to fetch email. Anything that writes to the database in a separate process, with Flask in another process reading from the same database, would work.

Thanks for reading this long. Thanks a lot.

CodePudding user response:

I ran into this problem recently; the best course of action is APScheduler: https://apscheduler.readthedocs.io/en/3.x/index.html.

You can create a scheduler to run any function at a given time interval, or use crontab notation. Just make sure to create and start the scheduler before your app.run().

E.g.

from apscheduler.schedulers.background import BackgroundScheduler

def print_hello():
    print("hello")

scheduler = BackgroundScheduler()  # create the scheduler
scheduler.add_job(print_hello, "interval", seconds=5)  # add the job to the scheduler
scheduler.start()  # start the scheduler in a background thread
app.run()

This will run the function every 5 seconds.

CodePudding user response:

@sam-rees

import multiprocessing
import sqlite3

from flask import Flask, jsonify

app = Flask(__name__)
app.config['SECRET_KEY'] = ''


def connect_db():
    sql = sqlite3.connect('data.db')
    sql.row_factory = sqlite3.Row
    return sql


def get_db():
    return connect_db()


def task():
    db = get_db()
    # make sure the table exists before the Flask process queries it
    db.execute('create table if not exists test (number integer)')
    db.commit()
    number = 0
    while True:  # run forever, like the original (practically) endless range
        print(number)
        db.execute('insert into test (number) values (?)',
                   [number])
        db.commit()
        number += 1
    db.close()  # unreachable as written; break out of the loop to reach it


@app.route('/')
def index():
    db = get_db()
    cur = db.execute('select number from test')
    rows = cur.fetchall()
    data = []
    for row in rows:
        data.append([x for x in row])  # or simply data.append(list(row))
    return jsonify(data)


def run_app():
    app.run()


if __name__ == '__main__':
    p1 = multiprocessing.Process(target=task)
    p2 = multiprocessing.Process(target=run_app)
    p1.start()
    p2.start()
    p1.join()
    p2.join()

Is it wrong to do it this way? Or will I get into problems?
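One thing to watch with this pattern: SQLite allows only one writer at a time, so under load the two processes can hit "database is locked" errors. A common mitigation (a sketch, not part of the original code) is to set a busy timeout and enable WAL mode on each connection:

```python
import sqlite3

def connect_db(path='data.db'):
    # timeout makes sqlite3 wait for a lock instead of failing immediately
    conn = sqlite3.connect(path, timeout=10)
    conn.row_factory = sqlite3.Row
    # WAL mode lets readers proceed while another process is writing
    conn.execute('PRAGMA journal_mode=WAL')
    return conn

conn = connect_db()
print(conn.execute('PRAGMA journal_mode').fetchone()[0])
```

With WAL enabled, the Flask process can keep serving reads while the writer process commits, which makes this two-process setup considerably more robust.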
