• Quartz SimpleTrigger going to BLOCKED state after a few repeat intervals--stackoverflow


    Question:

    I am using SimpleTrigger to schedule a job which is supposed to run indefinitely (repeat count -1).

    And I am using the JDBC job store to persist the job state in the DB.

    But the trigger fires for a few intervals (in my case always 8) and then goes to the BLOCKED state. To be specific, the value of TRIGGER_STATE changes to BLOCKED in the QRTZ_TRIGGERS table. Note that my prefix for the Quartz tables is QRTZ_. Below is my job/trigger info.

    Repeat count: -1, repeat interval: 6 seconds, start delay: 10 seconds
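    (For reference, a trigger built with exactly these parameters might look like the sketch below. This is only a minimal illustration against the Quartz 2.x builder API; the trigger/group names are placeholders, and my actual scheduling code is shown further down.)

    // Sketch only: repeat count -1 (forever), 6-second interval, 10-second start delay.
    // Static imports assumed for newTrigger, simpleSchedule and futureDate (Quartz 2.x builders),
    // plus org.quartz.DateBuilder and org.quartz.SimpleTrigger.
    public static SimpleTrigger buildSketchTrigger()
    {
        return newTrigger()
                .withIdentity("myJob_trigger", "myJob_trigger_group")        // placeholder names
                .startAt(futureDate(10, DateBuilder.IntervalUnit.SECOND))    // 10-second start delay
                .withSchedule(simpleSchedule()
                        .withIntervalInSeconds(6)                            // repeat interval: 6 seconds
                        .repeatForever())                                    // repeat count: -1
                .build();
    }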

    My Quartz configuration:

    #===============================================================
    #Configure ThreadPool
    #===============================================================
    org.quartz.threadPool.class=org.quartz.simpl.SimpleThreadPool
    org.quartz.threadPool.threadCount = 10
    org.quartz.threadPool.threadPriority = 5
    org.quartz.threadPool.threadsInheritContextClassLoaderOfInitializingThread = true
    #===============================================================
    #Configure JobStore
    #===============================================================
    org.quartz.jobStore.class = org.quartz.impl.jdbcjobstore.JobStoreTX
    org.quartz.jobStore.misfireThreshold = 60000
    org.quartz.jobStore.maxMisfiresToHandleAtATime=20
    # Flag to turn off to ignore all misfires
    scheduler.ignoreMisfire=no
    
    # Configuring JDBCJobStore with the Table Prefix
    org.quartz.jobStore.tablePrefix = QRTZ_
    
    # Using DriverDelegate
    org.quartz.jobStore.driverDelegateClass = org.quartz.impl.jdbcjobstore.oracle.OracleDelegate
    org.quartz.jobStore.useProperties = false

    Scheduler Class:

    // static imports assumed: org.quartz.JobBuilder.newJob, org.quartz.TriggerBuilder.newTrigger,
    // org.quartz.SimpleScheduleBuilder.simpleSchedule
    public static void scheduleJob(Class<? extends Job> job, JobDataMap dataMap) throws SchedulerException
    {
    
        Scheduler scheduler = schedulerFactoryBean.getScheduler();
        try
        {
            JobDetail jobDetail = newJob(job)
                    .withIdentity(job.getSimpleName()+"_"+DateUtil.getSystemDate(), job.getSimpleName() + "_group")
                    .storeDurably()
                    .usingJobData(dataMap)
                    .requestRecovery()
                    .build();
    
            SimpleTrigger trigger = (SimpleTrigger) newTrigger()
                    .withIdentity(job.getSimpleName() + "_trigger_"+DateUtil.getSystemDateWithMs(), job.getSimpleName() + "_trigger_group")
                    .startNow()                 
                    .withSchedule(simpleSchedule().repeatSecondlyForever(10).withMisfireHandlingInstructionFireNow())
                    .build();
    
            scheduler.scheduleJob(jobDetail, trigger);
    
            //logger.debug(scheduler.getMetaData().toString());
            scheduler.start();
        }
        catch (SchedulerException e)
        {
            e.printStackTrace();
            throw new SchedulerException("", e);
        }
    }

    Job Class:

    @PersistJobDataAfterExecution
    public class MyJob implements Job
    {
        private SessionFactory sessionFactory;
    
        @Override
        public void execute(JobExecutionContext context) throws JobExecutionException
        {
            try
            {
                getBeansFromContext(context);
            }
            catch (SchedulerException e)
            {
                // execute() may only declare JobExecutionException, so wrap the checked exception
                throw new JobExecutionException(e);
            }
            Session session = sessionFactory.openSession(); // Hibernate SessionFactory
            // do some DB operations here
        }
    
        private void getBeansFromContext(JobExecutionContext context) throws SchedulerException
        {
            ApplicationContext applicationContext = (ApplicationContext)context.getScheduler().getContext().get("applicationContext");
            this.sessionFactory=applicationContext.getBean(SessionFactory.class);
        }
    }

    Spring bean configuration for the Quartz scheduler factory:

    <beans:bean id="schedulerFactoryBean"
    class="org.springframework.scheduling.quartz.SchedulerFactoryBean">
    <beans:property name="jobFactory">
        <beans:bean class="org.springframework.scheduling.quartz.SpringBeanJobFactory"></beans:bean>
    </beans:property>
    <beans:property name="dataSource" ref="dataSource" />
    <beans:property name="transactionManager" ref="txManager" />
    <beans:property name="configLocation"
        value="resources/scheduler/Scheduler.properties" />
    <beans:property name="applicationContextSchedulerContextKey"
        value="applicationContext" />
    <beans:property name="autoStartup" value="true" />
    </beans:bean>
    
    <beans:bean id="taskExecutor"
    class="org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor"
    p:corePoolSize="5" p:maxPoolSize="10" p:queueCapacity="100"
    p:waitForTasksToCompleteOnShutdown="true" />

    Any help is really appreciated. Thanks in advance.

    Answer:

    I finally understood the problem and was able to resolve it.

    As @zerologiko commented, the issue is with transactions. I am using Spring-managed transactions with Hibernate. Once I declare my transaction policy, Spring takes care of starting and ending the transactions.
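    (For context, "declaring my transaction policy" refers to the usual declarative setup, roughly like the sketch below. The service class, method, and parameter names are hypothetical, and it assumes annotation-driven transactions are enabled against the txManager bean from the configuration above.)

    // Sketch only: a Spring-managed transactional business method.
    // Class and method names are hypothetical; Spring starts and commits the
    // transaction when the method is called through the Spring proxy.
    import org.springframework.stereotype.Service;
    import org.springframework.transaction.annotation.Transactional;

    @Service
    public class StatusUpdateService
    {
        @Transactional
        public void updateStatus(long entityId, String status)
        {
            // the DB update here runs inside a transaction that Spring opens and commits
        }
    }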

    The reason for the issue in my case: the Spring bean lifecycle is not in effect inside the scheduler job. To elaborate, as shown in the main post, I even had to access the applicationContext inside my job class using

    jobContext.getScheduler().getContext().get("applicationContext");

    I am trying to update one of our transactional databases with some status after the job is done.

    I initially failed to realize that those transactions are also controlled by Spring. When the DB updates were triggered from the job class, the transactions declared on my business methods had no effect.

    According to my understanding, the trigger was going to ACQUIRED because the threads that completed the job were not able to come back to the pool.

    To fix this problem, I manually opened/closed the transactions in my job class without relying on Spring CMT, and it worked without issues.
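    (The manual handling looked roughly like the sketch below. It is only a minimal illustration using a plain Hibernate Session and Transaction; the actual status updates are omitted, and the SessionFactory is still looked up from the application context as in the original job class.)

    @PersistJobDataAfterExecution
    public class MyJob implements Job
    {
        private SessionFactory sessionFactory; // obtained via getBeansFromContext(), as before

        @Override
        public void execute(JobExecutionContext context) throws JobExecutionException
        {
            Session session = sessionFactory.openSession();
            Transaction tx = null;
            try
            {
                tx = session.beginTransaction();   // open the transaction manually
                // ... perform the status updates here ...
                tx.commit();                       // commit manually
            }
            catch (RuntimeException e)
            {
                if (tx != null)
                {
                    tx.rollback();                 // roll back on failure
                }
                throw new JobExecutionException(e);
            }
            finally
            {
                session.close();                   // always release the session/connection
            }
        }
    }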

    Hope this helps someone facing the same kind of issue.

  • Original post: https://www.cnblogs.com/davidwang456/p/3858074.html