Flask-APScheduler: after persisting scheduled tasks, how to load them back from the database

Time:11-24

After persisting Flask-APScheduler jobs, how do I load the scheduled tasks stored in mongo back into the scheduler when the Flask app restarts?

1. The scheduled tasks are already set up in the Flask config; the code is as follows:

 class APSchedulerJobConfig(object):
     # scheduled task configuration
     JOBS = [
         # test task
         # {
         #     'id': 'shop_list',
         #     'func': 'app.sched_tasks.add_tasks.task_calls:shop_list',
         #     'args': ('shop_list',),
         #     'trigger': 'cron',
         #     'second': '*/10'
         # },
         # production task: cron periodic job, fetches the participating goods data each run
         {
             'id': 'juhuasuan_goods_items_list_cron',
             'func': 'app.sched_tasks.add_tasks.task_calls:juhuasuan_goods_items_list',
             'args': ('juhuasuan_goods_items_list_cron',),
             'trigger': {
                 'type': 'cron',
                 'hour': '12',  # start at 12 noon every day
                 # 'minute': '0',
                 # 'second': '0'
             }
         },
         # production task: interval periodic job, periodically starts the shop
         # monitoring task to fetch the latest participating goods
         {
             'id': 'juhuasuan_shop_list_interval',
             'func': 'app.sched_tasks.add_tasks.task_calls:juhuasuan_shop_list',
             'args': ('juhuasuan_shop_list_interval',),
             'trigger': 'interval',
             # 'seconds': 10,  # run once every 10 seconds
             'minutes': 10  # run once every 10 minutes
         },
         # production task: one-off date job, runs once shortly after the Flask app starts
         # {
         #     'id': 'juhuasuan_shop_list_date',
         #     'func': 'app.sched_tasks.add_tasks.task_calls:juhuasuan_shop_list',
         #     'args': ('juhuasuan_shop_list_date',),
         #     'next_run_time': datetime.now() + timedelta(seconds=10)
         # },
         # production task: one-off date job, runs once shortly after the Flask app starts
         # {
         #     'id': 'juhuasuan_goods_items_list_date',
         #     'func': 'app.sched_tasks.add_tasks.task_calls:juhuasuan_goods_items_list',
         #     'args': ('juhuasuan_goods_items_list_date',),
         #     'next_run_time': datetime.now() + timedelta(seconds=10)
         # }
     ]

     # database connection string
     config = 'mongodb://{}:{}@{}:{}/{}'
     config = config.format(
         quote_plus(Config.FLASK_DB_MONGO_USER),
         quote_plus(Config.FLASK_DB_MONGO_PASSWORD),
         Config.FLASK_DB_MONGO_ADDRESS_OA, Config.FLASK_DB_MONGO_PORT, Config.FLASK_DB_MONGO_BI
     )
     client = MongoClient(config, read_preference=ReadPreference.PRIMARY)
     SCHEDULER_JOBSTORES = {
         'default': MongoDBJobStore(collection='t_bi_sc_apscheduler_jobs', database=Config.FLASK_DB_MONGO_BI, client=client),
         'mongo': MongoDBJobStore(collection='t_bi_sc_apscheduler_jobs', database=Config.FLASK_DB_MONGO_BI, client=client)
     }
     SCHEDULER_EXECUTORS = {
         'default': {'type': 'threadpool', 'max_workers': 20}
     }
     SCHEDULER_JOB_DEFAULTS = {
         'coalesce': True,
         'max_instances': 3,
         'misfire_grace_time': 3600
     }
     # APScheduler API switch
     SCHEDULER_API_ENABLED = True
     # keep scheduled tasks from running twice in Flask DEBUG mode
     WERKZEUG_RUN_MAIN = True
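As a side note, the URI construction above can be sketched with just the standard library. The credentials below are made-up stand-ins for the Config.FLASK_DB_MONGO_* settings:

```python
from urllib.parse import quote_plus

# hypothetical stand-ins for the Config.FLASK_DB_MONGO_* settings
user = "bi_user"
password = "p@ss:word/1"
host, port, db = "10.0.0.5", 27017, "bi"

# quote_plus escapes '@', ':' and '/' in the credentials so they cannot
# be confused with the URI's own delimiters
uri = "mongodb://{}:{}@{}:{}/{}".format(
    quote_plus(user), quote_plus(password), host, port, db
)
print(uri)  # mongodb://bi_user:p%40ss%3Aword%2F1@10.0.0.5:27017/bi
```

Without the quoting, a password containing '@' or ':' would break the URI parsing in MongoClient.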


2. The scheduler is registered in the app's __init__.py. Because the project starts its workers with gunicorn + gevent, a file-lock mechanism is used when registering the scheduler so that only one worker runs it. The code is as follows:

 # register the APScheduler module; use a file lock so that gunicorn
 # does not create multiple scheduler instances
 app.config.from_object(APSchedulerJobConfig)
 f = open("scheduler.lock", "wb")
 try:
     fcntl.flock(f, fcntl.LOCK_EX | fcntl.LOCK_NB)
     scheduler.init_app(app)
     scheduler.start()
 except:
     pass

 def unlock():
     fcntl.flock(f, fcntl.LOCK_UN)
     f.close()

 atexit.register(unlock)
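A minimal, self-contained sketch of this lock pattern (Unix-only, since it relies on fcntl): the first process to take the exclusive non-blocking lock is the one that would start the scheduler, and later workers fall through the except branch:

```python
import atexit
import fcntl

lock_file = open("scheduler.lock", "wb")
got_lock = False
try:
    # LOCK_EX: exclusive lock; LOCK_NB: fail immediately instead of blocking
    fcntl.flock(lock_file, fcntl.LOCK_EX | fcntl.LOCK_NB)
    got_lock = True  # this worker would call scheduler.init_app/start here
except BlockingIOError:
    pass  # another worker already owns the scheduler

def unlock():
    if got_lock:
        fcntl.flock(lock_file, fcntl.LOCK_UN)
    lock_file.close()

atexit.register(unlock)
```

Catching BlockingIOError (what flock raises on contention in Python 3) is narrower and safer than the bare except in the snippet above.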


3. Because gunicorn is used, the workers cannot share the app context. So the scheduler instance is created in add_tasks.py and registered in the app's __init__.py by importing it from add_tasks.py; any other worker code that needs scheduler.add_job imports that same instance from add_tasks.py and thereby gets the right context. The code is as follows:

 from flask import Blueprint

 from flask_apscheduler import APScheduler


 sched = Blueprint('sched', __name__)

 scheduler = APScheduler()


4. Using scheduler.add_job; the code is as follows:

 # Start a scheduled task: a one-off job that adds/modifies entries in the
 # task list; the system then works through the task list every five minutes
 # as a periodic job
 args = (params['task_id'], record['monitor_shop'])
 job = {
     'id': params['task_id'],
     'func': 'app.sched_tasks.add_tasks.task_calls:juhuasuan_shop_data',
     'args': args,
     'seconds': 10,
 }
 next_time = datetime.now() + timedelta(seconds=job['seconds'])
 scheduler.add_job(
     func=job['func'], id=job['id'], args=job['args'],
     next_run_time=next_time, jobstore='mongo', replace_existing=True
 )
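The one-off scheduling above boils down to computing a next_run_time a few seconds in the future. A standard-library sketch of that arithmetic (the id here is a made-up stand-in for params['task_id']):

```python
from datetime import datetime, timedelta

# hypothetical stand-in for the job dict built from params['task_id']
job = {"id": "shop_42_monitor", "seconds": 10}

# with next_run_time set, APScheduler fires the job once at that instant
next_time = datetime.now() + timedelta(seconds=job["seconds"])
delay = (next_time - datetime.now()).total_seconds()
```

Passing replace_existing=True, as in the snippet, lets the call overwrite a job with the same id that is already sitting in the jobstore instead of raising a conflict.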


Now the question: when there is no scheduled-task information in the mongo database, the scheduled tasks start up normally when the Flask project starts, as shown in the figure below:



But when the task information is already in mongo and I restart the Flask project, the tasks fail to start; see the figure below for details:



After a restart, how can I make the scheduler load the previously stored scheduled tasks from mongo and execute them? I certainly can't delete the scheduled-task data from mongo by hand every time before starting the Flask project. How can this be solved?

CodePudding user response:

Bumping this. For now, the workaround is to start the project with a shell script: the script first runs a .py file that empties the stored tasks from the database, and then runs gunicorn to start the project.
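The clearing step of that workaround could be factored as below. pymongo's delete_many and the collection name come from the config earlier in the thread, while the wiring in the comments (client URI, gunicorn command) is hypothetical:

```python
def clear_persisted_jobs(collection):
    """Delete every persisted APScheduler job document so the JOBS config
    can re-add the same ids on the next startup without conflicts."""
    # delete_many({}) matches every document in the collection
    result = collection.delete_many({})
    return result.deleted_count

# Hypothetical wiring (needs pymongo and a live MongoDB):
#   from pymongo import MongoClient
#   client = MongoClient("mongodb://user:pass@host:27017/bi")
#   clear_persisted_jobs(client["bi"]["t_bi_sc_apscheduler_jobs"])
#   ...then start the project, e.g. subprocess.run(["gunicorn", "app:app"])
```

Keeping the deletion behind a function also makes it easy to test against a fake collection object without touching a real database.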

But this doesn't feel right, and the original tasks are all lost. I still hope that, on startup, the project can load the original tasks straight from the database.

CodePudding user response:

Not solved yet? Come on, let me show you how.