• Database grooming


    Retention period for expired data: for alerts, retention is counted from the time the alert was closed.
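
    A quick way to check the current retention values, using the same ObjectName and DaysToKeep columns that the grooming fix quoted later in these notes updates:

    -- View retention (in days) for each groomed data type
    USE OperationsManager
    SELECT ObjectName, DaysToKeep
    FROM dbo.PartitionAndGroomingSettings
    ORDER BY ObjectName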


    -- View the partition tables currently in use
    select * from PartitionTables
    where IsCurrent = 1

    -- Run the partitioning and grooming job:
    USE OperationsManager
    EXEC p_PartitioningAndGrooming      

    #### ???? After manually running this stored procedure, the InternalJobHistory table has no record of it, and the alert history is still visible but without details; that is, the alert has been deleted from the Alert table while its data still remains in the AlertHistory table.

    select * from dbo.Alert
    select * from dbo.AlertHistory
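
    A rough check for that observation (assumption: AlertHistory rows reference their parent alert through an AlertId column) counts the history rows whose alert no longer exists:

    -- Assumption: dbo.AlertHistory.AlertId references dbo.Alert.AlertId
    SELECT COUNT(*) AS OrphanedHistoryRows
    FROM dbo.AlertHistory ah
    WHERE NOT EXISTS (SELECT 1 FROM dbo.Alert a WHERE a.AlertId = ah.AlertId)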

    -- View the results of the partitioning and grooming job (stored procedure)
    select * from InternalJobHistory order by InternalJobHistoryId desc

    =========================================

    Rule: Optimize Indexes

    • Description: Optimizes indexes in the Operational DB
    • Stored Procedure: p_OptimizeIndexes
    • Schedule: Daily at 2:30 am

    Rule: Partitioning and Grooming

    • Description: System rule that runs a workflow to partition and groom the operational database.
    • Stored Procedure: p_PartitioningAndGrooming
    • Schedule: Daily at 12 midnight

    Rule: Alert Auto Resolve Execute All

    • Description: System rule that auto resolves alerts.
    • Stored Procedure: p_AlertAutoResolveExecuteAll
    • Schedule: Daily at 4:00 am

    FROM:http://www.systemcentercentral.com/database-maintenance-jobs-and-processes-in-operations-manager-and-essentials-2007/

    =========================================

    Uncontrolled Growth on the OpsMgr DB

    All last week the OpsMgr DB had been growing. It appears that for some reason the grooming job could not run at midnight.

    Some symptoms:
    1. The RMS was red with many DB-related errors.
    2. Many objects were grey.
    3. A bit after midnight the OpsMgr DB went into recovery mode.
    4. An event was logged saying the grooming had failed.

    The stored proc p_PartitioningAndGrooming was failing when it ran p_AlertGrooming.

    In short... There was not enough disk space (and therefore log space) to run the grooming job.

    The solution I came up with..

    p_AlertGrooming needs a huge temporary table containing all the alerts that are to be deleted, and the log file needed about 5 GB to hold it.

    I increased the DaysToKeep setting for alerts then re-ran the cleanup.

    By steadily decreasing the value by one day at a time and rerunning the job I was able to reduce the db size.

    Last Friday the job went normally.

    ----------------------------
    -- Raise the Alert retention so a single grooming pass deletes fewer rows
    UPDATE dbo.PartitionAndGroomingSettings
    SET DaysToKeep = 30
    WHERE (ObjectName = 'Alert')
    -- Free log space (TRUNCATE_ONLY is SQL 2005 syntax; it was removed in SQL Server 2008)
    BACKUP LOG OperationsManager WITH TRUNCATE_ONLY
    -- Groom alerts using the new retention value
    EXEC p_AlertGrooming
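
    A rough sketch of that step-down approach as a loop (the 7-day target is an assumption; substitute your own retention policy):

    -- Sketch: step the Alert retention down one day at a time, grooming after each step,
    -- so no single pass needs a huge transaction log. The target of 7 days is an assumption.
    USE OperationsManager
    DECLARE @target INT
    SET @target = 7
    WHILE (SELECT DaysToKeep FROM dbo.PartitionAndGroomingSettings WHERE ObjectName = 'Alert') > @target
    BEGIN
        UPDATE dbo.PartitionAndGroomingSettings
        SET DaysToKeep = DaysToKeep - 1
        WHERE ObjectName = 'Alert'

        EXEC p_AlertGrooming
    END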

    From:http://www.hulsepetrol.com/blogs/blog4.php/2008/06/30/uncontrolled-growth-on-the-opsmgr-db
     
    ========================================================================
     

    Perhaps you ignored your Operations Database size, perhaps grooming was failing and you didn’t notice, perhaps you wrote a BAD rule, and FLOODED the database with events, or performance data?

    Now, your database is full, and there is no more free space on the disk?

    What if you want to get rid of the data RIGHT NOW?

    We can run grooming manually.  I discuss a bit about the inner-workings of the grooming process HERE.  We can execute grooming by opening SQL Management Studio, and opening a query window against the OpsDB – and running the grooming procedure “EXEC p_PartitioningAndGrooming”.

    You will either get a success – or a failure.  If this fails, it is typically because the transaction log fills up before the job can complete.  Needing more transaction log space usually means there is a LARGE amount of non-partitioned data waiting to be groomed.
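
    If you suspect the transaction log is the bottleneck, the standard DBCC SQLPERF command shows log size and percent used for every database on the instance:

    -- Log size and percent-used for every database on the instance
    DBCC SQLPERF (LOGSPACE)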

    Data types:  The most common data types we insert (and have to groom) in the OpsDB are:

    • Alerts
    • Events
    • Performance
    • Performance Signature
    • Discovery data

    Let’s talk about partitioned and non-partitioned data types.  Events and Performance data in the OpsDB are partitioned.  All the others are not partitioned.  There are 61 tables to store Events and 61 tables to store Performance data in the operations DB.  Each table represents one day’s worth of storage.  This is done to assist in grooming.  Since there can be a HUGE amount of event and performance data, we groom these by truncating a daily table, which is FAR more efficient than using a “delete from tablename where date < xx/yy/zz”.  Truncating a table uses almost no transaction log space or time, while “delete from” uses a bunch.

    When we groom partitioned data, the first thing we do is truncate the next table in the list, then change the “IsCurrent” marker to the newly empty table.  You can look at this “map” in the PartitionTables table in the database.
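
    To see how much data is sitting in each daily partition, a rough sketch using the SQL Server catalog views (it assumes the daily Event tables follow the Event_NN naming pattern; the Performance partitions can be listed the same way with their own name pattern):

    -- Row counts per daily Event partition table (assumes the Event_NN naming pattern)
    SELECT t.name AS PartitionTable, SUM(p.rows) AS TotalRows
    FROM sys.tables t
    JOIN sys.partitions p ON p.object_id = t.object_id AND p.index_id IN (0, 1)
    WHERE t.name LIKE 'Event[_]__'
    GROUP BY t.name
    ORDER BY t.name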

    To see which tables we are currently writing to – check out:

    select * from PartitionTables 
    where IsCurrent = 1

    So – IF our opsDB is flooded with data – and we just need to clear up some space to work with…. a way to cheat, is to run the standard grooming stored procedure 62 times.  This will force a truncate of all partitioned data in the database.

    So we would run:  EXEC p_PartitioningAndGrooming in the SQL query window, 62 times.  You can track the progress by running the “IsCurrent” query check above.  This will wipe out all the partitioned data, and free up a ton of space in your DB really quickly.
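
    If you would rather not execute that by hand, a minimal sketch of the same 62-pass run as a loop (the PRINT line is only for progress tracking):

    -- Run the standard grooming procedure once per daily partition (62 passes)
    USE OperationsManager
    DECLARE @pass INT
    SET @pass = 1
    WHILE @pass <= 62
    BEGIN
        EXEC p_PartitioningAndGrooming
        PRINT 'Grooming pass ' + CAST(@pass AS varchar(5)) + ' of 62 complete'
        SET @pass = @pass + 1
    END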

    For non-partitioned data – there are no shortcuts… you have to groom this the old fashioned way, and wait for it to complete.  Once your DB is healthy again – this will go back to being a quick and painless process.

    FROM:http://blogs.technet.com/b/kevinholman/archive/2010/10/05/uh-oh-operations-database-is-full.aspx

    http://social.technet.microsoft.com/Forums/systemcenter/en-US/b47de027-3767-4291-a035-38183345c471/huge-size-of-database

  • Original post: https://www.cnblogs.com/dreamer-fish/p/3290902.html