Monitoring the System p environment with Tivoli Monitoring
Divyesh Vaidya, Product Manager, IBM Tivoli Monitoring
What is IBM Tivoli Monitoring (ITM)?
Proactive monitoring of critical components of the IT infrastructure to quickly isolate and diagnose performance problems.
With ITM, you can:
• Monitor cross-enterprise resources (distributed and z/OS)
• Work from a common, flexible, easy-to-use interface with customizable workspaces
• Get out-of-the-box identification of problems and expert advice for resolution
• Automate actions to fix problems before they impact end users
• Collect monitoring data for historical reports and performance trend analysis
Integrated End-to-End Support for Heterogeneous Environments
IBM Tivoli Monitoring spans the breadth of your IT environment and provides fast time to value because it's so easy to deploy!
(Diagram: agent coverage grouped by domain: platforms, databases, applications, web infrastructure, messaging & collaboration, and business integration. Named technologies include Unix, Linux, Windows, z/OS, i5/OS, System p, VMWare, Citrix, clusters, DB2 (z & distributed), Oracle, SQL, Sybase, Informix, IMS, SAP, Siebel, .NET (full suite of MS apps), Web Services, WebSphere (z & distributed), IIS, iPlanet, Apache, WebLogic, Lotus Domino, Exchange, WebSphere MQ, WebSphere MQ Integrator, CICS and Tuxedo. A Universal Agent provides an agent-less adapter (URL, SNMP, File, Socket, UDB…), a quick-attach agent API, and 100+ custom packages available for modification.)
Monitoring the System p environment with ITM
• ITM System Edition for System p – entitled offering for AIX / System p customers
  • Light-weight monitoring of System p
  • Visualize and manage the health and availability of the System p environment
  • See how virtual resources map to physical ones
• Full performance management of AIX / System p (available with ITM v6.1 FP 5)
  • Visualize and manage the performance of the entire System p environment
  • Historical data collection for improved troubleshooting, capacity planning and service-level reporting
  • Added capability for customer-configurable views, situations and workflows
What is offered with ITM SE for System p
• Entitled product for AIX / System p customers
• Monitor health and availability of the System p framework (CEC), AIX LPARs and VIOS
• Predefined workspaces, situations and expert advice, all of which may be customized
• Visualization of the health and availability of the entire System p environment from a single console
• Topological display of the mapping of virtual resources to physical ones
• Robust monitoring of System p servers, based on ITM V6.1 technology, for a seamless upgrade to the full-featured product
http://www.ibm.com/software/tivoli/products/monitor-systemp/
Enhanced System p Monitoring with ITM v6.1
ITM 6.1 provides:
• Enhanced AIX monitors – deeper metrics
• Ability to use other operating system monitors
• Application / database monitors
• Data warehouse – monitor trends and capacity requirements
ITM for AIX/System p Architecture
(Diagram: a TEP client console connects to the console server (TEPS) and its database; the ITM management server (TEMS) feeds the warehouse. A CEC agent and an HMC agent collect topology, availability and performance data through the HMC/IVM, while AIX Premium or Base agents and a VIOS Premium or Base agent report availability, health and performance for the AIX LPARs and VIOS on the System p server.)
Hypervisor view – Resources Allocated per LPAR
Shows the global CPU & memory allocation and the total CPU & memory allocated to LPARs
Hypervisor view - CPU share allocation per LPAR CPU share and mode info for each LPAR
AIX LPAR view – Resource Summary CPU, Memory, Disk, Network Info per LPAR
VIOS view – Virtual I/O Mapping Shows how network interfaces are mapped to LPARs
VIOS view – Storage Mapping Shows how storage devices are mapped to LPARs
HMC View – System Performance Information Shows detailed performance information for the HMC server
Out-of-box alerts and expert advice

AIX LPAR
KPX_memrepage_Info, KPX_vmm_pginwait_Info, KPX_vmm_pgfault_Info, KPX_vmm_pgreclm_Info, KPX_vmm_unpin_low_Warn, KPX_vmm_pgout_pend_Info, KPX_Pkts_Sent_Errors_Info, KPX_Sent_Pkts_Dropped_Info, KPX_Pkts_Recv_Errors_Info, KPX_Bad_Pkts_Recvd_Info, KPX_Recv_pkts_dropped_Info, KPX_Qoverflow_Info, KPX_perip_InputErrs_Info, KPX_perip_InputPkts_Drop_Info, KPX_perip_OutputErrs_Info, KPX_TCP_ConnInit_Info, KPX_TCP_ConnEst_Info, KPX_totproc_cs_Info, KPX_totproc_runq_avg_Info, KPX_totproc_load_avg_Info, KPX_totnum_procs_Info, KPX_perproc_IO_pgf_Info, KPX_perproc_nonIO_pgf_Info, KPX_perproc_memres_datasz_Info, KPX_perproc_memres_textsz_Info, KPX_perproc_mem_textsz_Info, KPX_perproc_vol_cs_Info, KPX_Active_Disk_Pct_Info, KPX_Avg_Read_Transfer_MS_Info, KPX_Read_Timeouts_Per_Sec_Info, KPX_Failed_Read_Per_Sec_Info, KPX_Avg_Write_Transfer_MS_Info, KPX_Write_Timeout_Per_Sec_Info, KPX_Failed_Writes_Per_Sec_Info, KPX_Avg_Req_In_WaitQ_MS_Info, KPX_ServiceQ_Full_Per_Sec_Info, KPX_perCPU_syscalls_Info, KPX_perCPU_forks_Info, KPX_perCPU_execs_Info, KPX_perCPU_cs_Info, KPX_Tot_syscalls_Info, KPX_Tot_forks_Info, KPX_Tot_execs_Info, KPX_LPARBusy_pct_Warn, KPX_LPARPhyBusy_pct_Warn, KPX_LPARvcs_Info, KPX_LPARfreepool_Warn, KPX_LPARPhanIntrs_Info, KPX_LPARentused_Info, KPX_LPARphyp_used_Info, KPX_user_acct_locked_Info, KPX_user_login_retries_Info, KPX_user_idletime_Info

HMC
KPH_Busy_CPU_Info, KPH_Paging_Space_Full_Info, KPH_Disk_Full_Warn, KPH_Runaway_Process_Info

VIOS
KVA_memrepage_Info, KVA_vmm_pginwait_Info, KVA_vmm_pgfault_Info, KVA_vmm_pgreclm_Info, KVA_vmm_unpin_low_Warn, KVA_vmm_pgout_pend_Info
Networking: KVA_Pkts_Sent_Errors_Info, KVA_Sent_Pkts_Dropped_Info, KVA_Pkts_Recv_Errors_Info, KVA_Bad_Pkts_Recvd_Info, KVA_Recv_pkts_dropped_Info, KVA_Qoverflow_Info, KVA_Real_Pkts_Dropped_Info, KVA_Virtual_Pkts_Dropped_Info, KVA_Output_Pkts_Dropped_Info, KVA_Output_Pkts_Failures_Info, KVA_Mem_Alloc_Failures_Warn, KVA_ThreadQ_Overflow_Pkts_Info, KVA_HA_State_Info, KVA_Times_Primary_Per_Sec_Info, KVA_perip_InputErrs_Info, KVA_perip_InputPkts_Drop_Info, KVA_perip_OutputErrs_Info, KVA_TCP_ConnInit_Info, KVA_TCP_ConnEst_Info
Process: KVA_totproc_cs_Info, KVA_totproc_runq_avg_Info, KVA_totproc_load_avg_Info, KVA_totnum_procs_Info, KVA_perproc_IO_pgf_Info, KVA_perproc_nonIO_pgf_Info, KVA_perproc_memres_datasz_Info, KVA_perproc_memres_textsz_Info, KVA_perproc_mem_textsz_Info, KVA_perproc_vol_cs_Info, KVA_Firewall_Info
KVA_Active_Disk_Pct_Info, KVA_Avg_Read_Transfer_MS_Info, KVA_Read_Timeouts_Per_Sec_Info, KVA_Failed_Read_Per_Sec_Info, KVA_Avg_Write_Transfer_MS_Info, KVA_Write_Timeout_Per_Sec_Info, KVA_Failed_Writes_Per_Sec_Info, KVA_Avg_Req_In_WaitQ_MS_Info, KVA_ServiceQ_Full_Per_Sec_Info, KVA_perCPU_syscalls_Info, KVA_perCPU_forks_Info, KVA_perCPU_execs_Info, KVA_perCPU_cs_Info, KVA_Tot_syscalls_Info, KVA_Tot_forks_Info, KVA_Tot_execs_Info, KVA_LPARBusy_pct_Warn, KVA_LPARPhyBusy_pct_Warn, KVA_LPARvcs_Info, KVA_LPARfreepool_Warn, KVA_LPARPhanIntrs_Info, KVA_LPARentused_Info, KVA_LPARphyp_used_Info, KVA_user_acct_locked_Info, KVA_user_login_retries_Info, KVA_user_idletime_Info
Alert from VIOS due to threshold violation
An informational alert raised because of low CPU utilization
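The pattern behind an alert like this is simple enough to show in a short sketch. The following is plain Python, not the ITM situation engine or its API: sample a metric on a fixed interval and raise an informational event when it violates a threshold. The metric source, threshold value and sampling interval are hypothetical.

import time

LOW_CPU_THRESHOLD_PCT = 5.0   # hypothetical threshold for an informational alert
SAMPLE_INTERVAL_SEC = 60      # hypothetical sampling interval

def read_cpu_busy_pct() -> float:
    """Placeholder for the agent's metric collection (e.g. VIOS CPU busy %)."""
    raise NotImplementedError

def monitor() -> None:
    # Sample on a fixed interval and flag any violation; in ITM the matching
    # situation would fire in the TEP console together with its expert advice.
    while True:
        busy = read_cpu_busy_pct()
        if busy < LOW_CPU_THRESHOLD_PCT:
            print(f"INFO alert: CPU utilization {busy:.1f}% is below the "
                  f"{LOW_CPU_THRESHOLD_PCT}% threshold")
        time.sleep(SAMPLE_INTERVAL_SEC)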
Who to Contact for additional information Sales and Technical Enablement • James Katz, Technical Enablement, jakatz@us.ibm.com, 916-768-4945 Product Management • Divyesh Vaidya, ITM Product Manager, dvaidya@us.ibm.com, 512-838-2516 Market Management • Todd Kindsfather, ITM Market Manager, tkindsfa@us.ibm.com, 469-287-2221
The Benefits of Virtualisation
• Shared processors
  • Increased server utilisation
  • Potential for reduction in software costs
• Shared adapters
  • Reduction in server costs
  • Reduction in infrastructure and provisioning costs
• Rapid service provisioning
  • Take early advantage of new market opportunities
  • React to short-term development needs
(Diagram: several IBM pSeries servers consolidated onto an IBM System p5 with a shared processor pool, shared disk adapters and shared network adapters, requiring fewer switches and routers.)
Why consolidate?
One application per server:
• Under-utilised servers or partitions
• Application growth requires more frequent server provisioning
Large systems with shared capacity:
• Reduce capacity requirements
• Be less dependent on sizing accuracy
• Aggregate capacity for growth
• Handle peak workloads with shared resources
• Minimise application movement and server provisioning
• Adopt a utility charging model
POWER5+ Systems
Consistency:
• Binary compatibility
• Mainframe-inspired reliability
• Support for virtualisation
• AIX and/or Linux
Complete flexibility for workload deployment.
Entry, mid-range and enterprise models: p5 505/Q, p5 510/Q, p5 520/Q, p5 550/Q, p5 560Q, p5 570, p5 575, p5 590, p5 595.
Dynamic LPAR
• Standard on all new systems
• Allocate processors, memory and I/O to create virtual servers
• Minimum 128 MB memory, one CPU, one PCI-X adapter slot
• All resources can be allocated independently
• Resources can be moved between live partitions (see the sketch after this list)
• Applications are notified of configuration changes
• Movement can be automated using Partition Load Manager
• Works with AIX 5.2+ or Linux 2.4+
(Diagram: Production, Legacy Apps, Test/Dev and File/Print partitions running AIX 5L and Linux on the Hypervisor, with the HMC moving resources between live partitions.)
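As referenced in the list above, here is a minimal Python sketch (not HMC or Partition Load Manager code) of the constraint a dynamic resource move has to respect: a live partition may not be taken below the minimums quoted on this slide, shown here for memory (the 128 MB floor); the one-CPU minimum works the same way. Partition names and sizes are made-up example values.

from dataclasses import dataclass

MIN_MEMORY_MB = 128   # minimum memory per partition (from the slide)

@dataclass
class Lpar:
    name: str
    cpus: int
    memory_mb: int

def move_memory(src: Lpar, dst: Lpar, amount_mb: int) -> None:
    """Move memory between two live partitions, honouring the source minimum."""
    if src.memory_mb - amount_mb < MIN_MEMORY_MB:
        raise ValueError(f"{src.name} would drop below {MIN_MEMORY_MB} MB")
    src.memory_mb -= amount_mb
    dst.memory_mb += amount_mb
    # On a real system the HMC applies the change and the affected operating
    # systems are notified of their new configuration.

if __name__ == "__main__":
    prod = Lpar("Production", cpus=4, memory_mb=8192)
    test = Lpar("Test/Dev", cpus=1, memory_mb=1024)
    move_memory(test, prod, 512)   # Test/Dev keeps 512 MB, Production grows to 8704 MB
    print(prod, test)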
Micro-Partitions
• Share a pool of processors
• Are allocated CPU resource "on demand"
• Use no resource when idle
• Get a guaranteed minimum amount of resource when active
• Can be capped or uncapped
• Can share disk adapters
• Can share network adapters
• Support dynamic reconfiguration of memory
• Are isolated and secure
(Diagram: micro-partitions running AIX 5.3, AIX 5.2, Linux 2.6, i5/OS and an I/O Server Partition share processor pools of 8, 3 and 1 CPUs on the Hypervisor, with shared I/O adapters connecting to the LAN, WAN and disks.)
Micro-Partitions – Definitions
• Virtual Processors
  • The number of processors that the operating system "sees"
  • Can exceed the number of real processors in the shared pool
  • Sets an upper limit for resource consumption
• Capacity Entitlement
  • The guaranteed minimum amount of resource that a partition gets when it is active
  • Can be less than or equal to the number of virtual processors
• Capped/Uncapped
  • Capped partitions can use their capacity entitlement and no more
  • Uncapped partitions get a share of unused capacity in addition to their capacity entitlement
• Weight
  • The relative priority of the partition
A worked example of how these definitions interact follows below.
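The worked example below is a simplified Python sketch, not hypervisor code: real uncapped scheduling redistributes unused shares continuously, whereas this one-pass version only illustrates how entitlement, weight, capping and the virtual-processor ceiling interact. All partition names and numbers are hypothetical.

from dataclasses import dataclass

@dataclass
class Partition:
    name: str
    entitlement: float      # guaranteed processing units when active
    virtual_procs: int      # upper bound on usable processing units
    capped: bool
    weight: int             # relative priority for spare capacity (uncapped only)

def distribute(pool_size: float, partitions: list[Partition]) -> dict[str, float]:
    """Return the processing units each active partition could consume."""
    # Every active partition is guaranteed its capacity entitlement.
    usage = {p.name: p.entitlement for p in partitions}
    spare = pool_size - sum(usage.values())

    # Only uncapped partitions compete for spare capacity, in proportion to
    # their weights, and none may exceed its virtual-processor count.
    uncapped = [p for p in partitions if not p.capped]
    total_weight = sum(p.weight for p in uncapped) or 1
    for p in uncapped:
        headroom = p.virtual_procs - p.entitlement
        usage[p.name] += min(headroom, spare * p.weight / total_weight)
    return usage

if __name__ == "__main__":
    lpars = [
        Partition("prod",  entitlement=2.0, virtual_procs=4, capped=False, weight=192),
        Partition("test",  entitlement=0.5, virtual_procs=2, capped=False, weight=64),
        Partition("batch", entitlement=1.0, virtual_procs=1, capped=True,  weight=0),
    ]
    # 8-CPU shared pool: prod and test compete for the 4.5 spare units 3:1 by
    # weight, each limited by its virtual-processor count; batch stays capped
    # at its 1.0 entitlement.
    print(distribute(8.0, lpars))

Running this prints prod at 4.0 (clipped by its four virtual processors), test at 1.625 and batch at 1.0, matching the rules above: capped partitions never exceed their entitlement, while uncapped ones grow by weight up to their virtual-processor limit.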