
Keeping Up with z/OS’ Alphabet Soup


Presentation Transcript


  1. Keeping Up with z/OS’ Alphabet Soup Darrell Faulkner Computer Associates Development Manager NeuMICS

  2. Objectives • Integrated Coupling Facility (ICF) and Integrated Facilities for LINUX (IFL) • PR/SM and LPs • Intelligent Resource Director (IRD) • IBM License Manager (ILM) • Capacity Upgrade on Demand (CUoD) • Conclusions

  3. Acronyms • CF - Coupling Facility • CP - Central Processor • CPC - Central Processor Complex • ICF - Integrated Coupling Facility • IFL - Integrated Facility for LINUX • PR/SM - Processor Resource/Systems Manager • HMC - Hardware Management Console • LLIC - LPAR Licensed Internal Code • Logical CP - Logical Processor • LP - Logical Partition • LPAR - Logical Partitioning (LPAR mode) • PU - Processor Unit

  4. ICF and IFL • Beginning with some IBM G5 processor models, PUs (Processing Units) can be configured as non-general-purpose processors • Benefit - does not change the model number, hence no software licensing cost increase • ICF = Integrated Coupling Facility • IFL = Integrated Facility for LINUX

  5. z900 Models 2064-(101-109) [Diagram: a CPC with memory and a 12-PU MultiChip Module (MCM); all of these models contain the 12-PU MCM. Legend: CP = Central (General) Processor, ICF = Integrated Coupling Facility, IFL = Integrated Facility for LINUX, SAP = System Assist Processor, PU = Processing Unit]

  6. z900 Model 2064-105 [Diagram: CPC memory plus the 12-PU MCM, with 5 PUs configured as CPs, 2 as SAPs, and the remainder unconfigured.] 5 PUs configured as CPs = Model 105; the number of CPs defined determines the model number. • CP = Central (General) Processor

  7. z900 Model 2064-105 [Diagram: the same MCM with PUs configured as CPs, ICFs, IFLs, and SAPs; one PU is always left unconfigured as a "spare".] 5 PUs configured as CPs = Model 105; the number of CPs defined determines the model number. ICFs, IFLs, and SAPs do not incur software charges. • CP = Central (General) Processor

  8. ICFs and IFLs • IBM SMF Type 70 subtype 1 record - CPU Identification Section • There is one section per EBCDIC name that identifies a CPU type. 'CP' and 'ICF', with appropriate trailing blanks, are examples of EBCDIC names describing a General Purpose CPU and an Internal Coupling Facility CPU, respectively. • Field SMF70CIN: offset 0, length 16, EBCDIC, CPU-identification name • As of z/OS Version 1 Release 2, both IFLs and ICFs are represented by 'ICF' in the SMF type 70 CPU ID Section • CP = Central Processor • ICF = Integrated Coupling Facility • IFL = Integrated Facility for LINUX
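Because SMF70CIN is an EBCDIC name padded with trailing blanks, any tooling that post-processes type 70 records has to decode and strip the field before comparing it against 'CP' or 'ICF'. A minimal Python sketch, assuming the raw 16-byte field has already been extracted from the record (the helper name and the cp500 EBCDIC code page choice are illustrative):

    # Minimal sketch: interpreting the SMF70CIN field from an SMF type 70
    # subtype 1 CPU-identification section. Assumes the raw 16-byte field has
    # already been extracted from the record; the helper name is illustrative.
    def cpu_type_from_smf70cin(raw_field: bytes) -> str:
        """Decode the EBCDIC CPU-identification name and strip trailing blanks."""
        name = raw_field.decode("cp500").rstrip()  # cp500 is one EBCDIC code page
        # Per the slide: on z/OS V1R2, both ICFs and IFLs report 'ICF' here.
        return name

    # Example: a 16-byte EBCDIC field containing 'ICF' padded with blanks.
    sample = "ICF".ljust(16).encode("cp500")
    print(cpu_type_from_smf70cin(sample))   # -> 'ICF'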

  9. PR/SM LPAR • Allows up to 15 images (LPs) per CPC • Different control programs on images • (z/OS, z/VM, Linux, CFCC etc.) • Each LP (image) assigned CPC resources: • Processors (CPs) (referred to as “logical CPs”) • Memory • Channels • Each LP either DEDICATED or SHARED • Logical CP = Logical Processor • LP = Logical Partition • CPC = Central Processor Complex

  10. PR/SM Benefits • Protection/isolation of business critical applications from non-critical workloads • Isolation of test operating systems • Workload Balancing • Different operating systems -- same CPs • Ability to guarantee minimum percent of shared CP resource to each partition • More “white space” – the ability to handle spikes and unpredictable demand

  11. LP Configuration Decisions • LP definitions entered on HMC • Dedicated or not-dedicated (shared) • Logical processors (initial, reserved) • Weight (initial, min, max) • Capped or not-capped • CPC memory allocation • I/O Channel distribution/configuration • More • HMC = Hardware Management Console • LP = Logical Partition • CPC = Central Processor Complex
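For orientation, the per-LP decisions listed above can be pictured as a small profile record. This is purely illustrative, not HMC syntax, and all of the values are hypothetical:

    # Illustrative only: the fields below mirror the image-profile decisions
    # listed on slide 11; this is not HMC syntax, just a sketch of the
    # parameters an installation chooses for each logical partition.
    zos1_image_profile = {
        "name": "ZOS1",
        "processor_mode": "shared",                  # dedicated or shared
        "logical_cps": {"initial": 5, "reserved": 2},
        "weight": {"initial": 400, "min": 300, "max": 600},
        "capped": False,                             # "hard" capping off
        "memory_gb": 16,                             # CPC memory allocation
        "channels": ["CHP00", "CHP01"],              # I/O channel distribution
    }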

  12. Dedicated LPs (HMC Image Profile: ZOS1) • An LP's logical CPs are permanently assigned to specific CPC physical CPs • Less LPAR overhead (than shared LPs) • Dedicated LPs waste physical (CPC) processor cycles unless 100% busy • When less than 100% busy, the physical CPs assigned to dedicated LPs are IDLE • Logical CP = Logical Processor • LP = Logical Partition • CPC = Central Processor Complex

  13. LPAR Mode - Dedicated [Diagram: PR/SM LPAR LIC over CPC memory and physical CPs; the ZOS1 image has 3 dedicated logical processors, the ZOS2 image has 2 dedicated logical processors.] Same problem as basic mode - unused cycles are wasted. • LCP = Logical CP = Logical Processor

  14. Shared LPs HMC Image Profile ZOS1

  15. Shared LPs HMC Image Profile ZOS2

  16. LPAR Mode - Shared [Diagram: PR/SM LPAR LIC dispatching logical CPs from a shared pool of 5 physical CPs; the ZOS1 image has 5 logical CPs with weight 400, the ZOS2 image has 3 logical CPs with weight 100.] • LCP = Logical CP = Logical Processor

  17. LPAR Dispatching What does LLIC (LPAR Licensed Internal Code) do? • LCPs are treated as dispatchable units of work and are placed on a ready queue • LLIC executes on a physical CP, selects a ready LCP, and dispatches it onto a real CP • z/OS executes on the physical CP until its timeslice expires (12.5-25 milliseconds) or until z/OS enters a wait state • The environment is saved and LLIC executes on the freed CP • If the LCP is still ready (it used its timeslice), it is placed back on the ready queue • LLIC = LPAR Licensed Internal Code • CP = Central Processor • LCP = Logical CP = Logical Processor

  18. Selecting Logical CPs • Priority on the “ready” queue is determined by PR/SM LIC • Based on LP logical CP “actual” utilization versus “targeted” utilization • Targeted utilization is determined as a function of #LCPs and LP Weight • LP weight is a user specified number between 1 and 999 (recommended 3 digits) • LP = Logical Partition • LLIC = LPAR Licensed Internal Code • LCP = Logical CP = Logical Processor • CP = Central Processor
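Slides 17 and 18 together describe a priority-ordered ready queue of logical CPs. The exact priority calculation is internal to PR/SM LIC, so the sketch below assumes a simple rule purely for illustration: favor the LCP whose partition is furthest below its targeted utilization.

    # Schematic sketch of the LPAR dispatch selection described on slides 17-18.
    # PR/SM's real priority calculation is internal LIC; here we assume a simple
    # rule -- favor the logical CP whose LP is furthest below its targeted share.
    from dataclasses import dataclass

    @dataclass
    class LogicalCP:
        lp_name: str
        actual_util: float    # measured utilization of this LP's logical CPs
        target_util: float    # derived from the LCP count and LP weight

    def pick_next(ready_queue: list[LogicalCP]) -> LogicalCP:
        """Choose the ready LCP with the largest shortfall versus its target."""
        return max(ready_queue, key=lambda lcp: lcp.target_util - lcp.actual_util)

    ready = [LogicalCP("ZOS1", actual_util=0.70, target_util=0.80),
             LogicalCP("ZOS2", actual_util=0.15, target_util=0.33)]
    print(pick_next(ready).lp_name)   # ZOS2 has the larger shortfall here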

  19. LP Weights - Shared Pool % [Diagram: ZOS1 (weight 400) and ZOS2 (weight 100) sharing a 5-CP pool under PR/SM LPAR LIC.] • Total of LP Weights = 400 + 100 = 500 • ZOS1 LP Weight % = 100 * 400/500 = 80% • ZOS2 LP Weight % = 100 * 100/500 = 20% • LP = Logical Partition • LCP = Logical CP = Logical Processor

  20. LP Weights Guarantee "Pool" CP % Share • A weight is assigned to each LP defined as shared • All active LP weights are summed to a Total • Each LP is guaranteed a share of the pooled physical CPs based on its weight % of the Total • Based on the number of shared logical CPs defined for each LP and the LP weight %, LLIC determines the "ready queue" priority of each logical CP • Weight priority is enforced only when there is contention! • LP = Logical Partition • LLIC = LPAR Licensed Internal Code • LCP = Logical CP = Logical Processor • CP = Central Processor

  21. LP Target CPs [Diagram: the same shared 5-CP pool; ZOS1 weight 400, ZOS2 weight 100.] • ZOS1 LP Weight % = 80% • Target CPs = 0.8 * 5 = 4.0 CPs • ZOS2 LP Weight % = 20% • Target CPs = 0.2 * 5 = 1.0 CPs • CP = Central Processor • LP = Logical Partition • LCP = Logical CP = Logical Processor

  22. CP = Central Processor • LP = Logical Partition • LCP = Logical CP = Logical Processor LP Logical CP share • ZOS1 LP is guaranteed 4 physical CPs • ZOS1 can dispatch work to 5 logical CPs • Each ZOS1 logical CP gets 4/5 or 0.8 CP • ZOS1 effective speed = 0.8 potential speed • ZOS2 LP is guaranteed 1 physical CP • ZOS2 can dispatch work to 3 logical CPs • Each ZOS2 logical CP gets 1/3 or 0.333 CP • ZOS2 effective speed = 0.333 potential speed
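The arithmetic on slides 19 through 22 is mechanical enough to script. A short Python sketch (the helper name is ours; the numbers are the slides' example):

    # A small worked sketch of the arithmetic on slides 19-22: pool share from
    # LP weights, guaranteed (target) physical CPs, and effective logical-CP speed.
    def lp_share(weights: dict[str, int], pool_cps: int, lcps: dict[str, int]) -> None:
        total = sum(weights.values())
        for lp, w in weights.items():
            weight_pct = w / total                      # guaranteed pool share
            target_cps = weight_pct * pool_cps          # guaranteed physical CPs
            eff_speed = target_cps / lcps[lp]           # per logical CP
            print(f"{lp}: {weight_pct:.1%} share, "
                  f"{target_cps:.3f} target CPs, "
                  f"effective LCP speed {eff_speed:.3f}")

    # Values from the slides: 5-way shared pool, ZOS1 weight 400 with 5 LCPs,
    # ZOS2 weight 100 with 3 LCPs.
    lp_share({"ZOS1": 400, "ZOS2": 100}, pool_cps=5,
             lcps={"ZOS1": 5, "ZOS2": 3})
    # ZOS1: 80.0% share, 4.000 target CPs, effective LCP speed 0.800
    # ZOS2: 20.0% share, 1.000 target CPs, effective LCP speed 0.333

Re-running the same helper with ZOS2's weight raised to 200, or with different logical CP counts, reproduces the figures on slides 24 through 33.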

  23. Impact of Changing Weights • An active LP's weight can be changed non-disruptively using the system console • Increasing an LP's weight by "x", without any other configuration changes, increases its pooled CP share at the expense of all other shared LPs • This is because the TOTAL shared LP weight increases while every other sharing LP's weight remains constant: for any other partition LPn, LPn weight / TOTAL > LPn weight / (TOTAL + x) • LP = Logical Partition • CP = Central Processor

  24. Changing LPAR Weights [Diagram: ZOS2's weight raised from 100 to 200 (100 + 100); ZOS1 remains at 400; same shared 5-CP pool.] • Total of LP Weights = 400 + 200 = 600 • ZOS1 LP Weight % = 100 * 400/600 = 66.67% • ZOS2 LP Weight % = 100 * 200/600 = 33.33% • CP = Central Processor • LP = Logical Partition • LCP = Logical CP = Logical Processor

  25. LP Target CPs [Diagram: shared 5-CP pool with ZOS1 at weight 400 and ZOS2 at weight 200.] • ZOS1 Weight % = 66.67% • Target CPs = 0.667 * 5 = 3.335 CPs • ZOS2 LP Weight % = 33.33% • Target CPs = 0.333 * 5 = 1.665 CPs • CP = Central Processor • LP = Logical Partition • LCP = Logical CP = Logical Processor

  26. CP = Central Processor • LP = Logical Partition • LCP = Logical CP = Logical Processor LP Logical CP share • ZOS1 LP is guaranteed 3.335 physical CPs • ZOS1 can dispatch work to 5 logical CPs • Each ZOS1 logical CP gets 3.335/5 or 0.667 CP • ZOS1 effective speed = 0.667 potential speed • ZOS2 LP is guaranteed 1.665 physical CP • ZOS2 can dispatch work to 3 logical CPs • Each ZOS2 logical CP gets 1.665/3 or 0.555 CP • ZOS2 effective speed = 0.555 potential speed

  27. CP = Central Processor • LP = Logical Partition • LCP = Logical CP = Logical Processor Changing Logical CP Count • An active LP’s logical CPs can be increased or reduced non-disruptively • Changing the number of logical CPs for a shared LP increases or decreases the LP work “potential” • Changes z/OS and PR/SM overhead • Does not change the % CPC pool share • Changes the LP logical CP “effective speed” • CPC = Central Processor Complex

  28. Adding Logical CPs [Diagram: a fourth logical CP added to the ZOS2 image; weights remain 400 and 100.] WEIGHT % UNCHANGED!! • Total LP Weights = 400 + 100 = 500 • ZOS1 LP Weight % = 100 * 400/500 = 80% • ZOS2 LP Weight % = 100 * 100/500 = 20%

  29. Adding Logical CPs [Diagram: shared 5-CP pool; ZOS1 with 5 logical CPs, ZOS2 now with 4 logical CPs.] TARGET CPs UNCHANGED!! • ZOS1 Weight % = 80% • Target CPs = 0.8 * 5 = 4.0 CPs • ZOS2 LP Weight % = 20% • Target CPs = 0.2 * 5 = 1.0 CPs • CP = Central Processor • LP = Logical Partition • LCP = Logical CP = Logical Processor

  30. Adding Logical CPs • ZOS1 LP is guaranteed 4 physical CPs • ZOS1 can dispatch work to 5 logical CPs • Each ZOS1 logical CP gets 4/5 or 0.8 CP • ZOS1 effective speed = 0.8 potential speed • ZOS2 LP is guaranteed 1 physical CP • ZOS2 can dispatch work to 4 logical CPs • Each ZOS2 logical CP gets 1/4 or 0.25 CP • ZOS2 effective speed = 0.25 potential speed • ZOS2 effective logical CP speed DECREASED!! • CP = Central Processor • LP = Logical Partition • LCP = Logical CP = Logical Processor

  31. Subtracting Logical CPs [Diagram: a logical CP removed from the ZOS2 image; weights remain 400 and 100.] WEIGHT % UNCHANGED!! • Total LP Weights = 400 + 100 = 500 • ZOS1 LP Weight % = 100 * 400/500 = 80% • ZOS2 LP Weight % = 100 * 100/500 = 20% • CP = Central Processor • LP = Logical Partition • LCP = Logical CP = Logical Processor

  32. Subtracting Logical CPs [Diagram: shared 5-CP pool; ZOS1 with 5 logical CPs, ZOS2 now with 2 logical CPs.] TARGET CPs UNCHANGED!! • ZOS1 Weight % = 80% • Target CPs = 0.8 * 5 = 4.0 CPs • ZOS2 LP Weight % = 20% • Target CPs = 0.2 * 5 = 1.0 CPs • CP = Central Processor • LP = Logical Partition • LCP = Logical CP = Logical Processor

  33. Subtracting Logical CPs • ZOS1 LP is guaranteed 4 physical CPs • ZOS1 can dispatch work to 5 logical CPs • Each ZOS1 logical CP gets 4/5 or 0.8 CP • ZOS1 effective speed = 0.8 potential speed • ZOS2 LP is guaranteed 1 physical CP • ZOS2 can dispatch work to 2 logical CPs • Each ZOS2 logical CP gets 1/2 or 0.5 CP • ZOS2 effective speed = 0.5 potential speed • ZOS2 effective logical CP speed INCREASED!! • CP = Central Processor • LP = Logical Partition • LCP = Logical CP = Logical Processor

  34. Logical CPs - How Many? • Both z/OS and PR/SM overhead minimized when LCP count is equal to the physical CP requirements of the executing workload • The number of LCPs online to an LP is correct … sometimes … • When the LP is CPU constrained, too few • When the LP is idling, too many • When the LP is about 100% busy, just right! • Ideally, effective LCP speed = 1.0 • CP = Central Processor • LP = Logical Partition • LCP = Logical CP = Logical Processor
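Since effective logical CP speed is the LP's guaranteed physical CPs divided by its online logical CPs, the "just right" LCP count is roughly the LP's guaranteed share. A rule-of-thumb sketch (rounding up is our assumption; it trades a little effective speed for headroom above the guarantee):

    # Rule-of-thumb sketch for slide 34: effective LCP speed is the LP's
    # guaranteed physical CPs divided by its online logical CPs, so a speed of
    # 1.0 implies an LCP count roughly equal to the guaranteed CPs.
    import math

    def suggested_lcps(weight: int, total_weight: int, pool_cps: int) -> int:
        target_cps = pool_cps * weight / total_weight   # guaranteed physical CPs
        return max(1, math.ceil(target_cps))            # rounding up is an assumption

    print(suggested_lcps(400, 500, 5))   # 4 -> effective speed 4/4 = 1.0
    print(suggested_lcps(100, 500, 5))   # 1 -> effective speed 1/1 = 1.0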

  35. LP Configuration Decisions • LP definitions entered on HMC • Dedicated or not-dedicated (shared) • Logical processors (initial, reserved) • Weight (initial, min, max) • Capped or not-capped • CPC memory allocation • I/O Channel distribution/configuration • etc • HMC = Hardware Management Console • CPC = Central Processor Complex

  36. HMC Image Profile LP “Hard” Capping • Initial weight enforced • LLIC will not allow LP to use more than guaranteed shared pool % even when other LPs idle • Dynamic change to capping status • Capped or not capped • Capped weight value • In general, not recommended • LLIC = LPAR Licensed Internal Code • LP = Logical Partition

  37. Intelligent Resource Director • IRD brings four new functions to the parallel SYSPLEX that help ensure important workloads meet their goals • WLM LPAR Weight Management • WLM Vary CPU Management • Dynamic Channel-Path Management • Channel Subsystem I/O Priority Queueing

  38. IRD = PR/SM + WLM • IRD WLM CPU management allows WLM to dynamically change the weights and number of online logical CPs of all z/OS shared LPs in a CPC LPAR cluster • IRD WLM Weight Management • Allows WLM to instruct PR/SM to adjust shared LP weight • IRD WLM Vary CPU Management • Allows WLM to instruct PR/SM to adjust logical CPs online to LPs • LP = Logical Partition • Logical CP = Logical Processor

  39. IRD Prerequisites (HMC Image Profile: ZOS1) • Running z/OS in 64-bit mode • Running the z900 in LPAR mode • Using shared (not dedicated) CPs • No hard LP caps • Running WLM goal mode • LPs must select "WLM Managed" • Access to the SYSPLEX coupling facility • LP = Logical Partition

  40. What is an LPAR Cluster? An LPAR cluster is the set of all z/OS shared LPs in the same z/OS parallel SYSPLEX on the same CPC. [Diagram: two z900 CPCs hosting dedicated and shared LPs (z/OS, z/VM, and Linux images) spread across SYSPLEX1 and SYSPLEX2.] • CPC = Central Processor Complex

  41. What is an LPAR Cluster? [Same two-z900 diagram, color coded] 4 LPAR clusters in this configuration.

  42. WLM LPAR Weight Management • Dynamically changes LP Weights • Donor Receiver Strategy • WLM Evaluates all SYSPLEX Workloads • Suffering Service Class Periods (SSCPs) • High (>1) SYSPLEX Performance Index (PI) • High Importance • CPU delays • LP = Logical Partition • WLM = Workload Manager

  43. WLM Policy Adjustment Cycle • IF the SSCP is missing its goal due to CPU delay and WLM cannot help it by adjusting dispatch priorities within the LP, THEN WLM and PR/SM start talking: • Estimate the impact of increasing the SSCP's LP weight • Find a donor LP if the SSCP's PI would improve; the donor LP must contain a heavy CPU-using SCP • Evaluate the impact of reducing the donor LP's weight - it cannot hurt donor SCPs of equal or greater importance • WLM changes the weights via a new LPAR interface • WLM = Workload Manager • SSCP = Suffering Service Class Period • PI = Performance Index • SCP = Service Class Period

  44. Rules and Guidelines • 5% from donor, 5% to receiver • No “recent” LP cluster weight adjustments • Must allow time for impact of recent adjustments. Avoid see-saw effect • Receiver and Donor LPs will always obey specified min/max weight assignments • Non-z/OS images unaffected because total shared LP weight remains constant! • LP = Logical Partition
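The donor/receiver mechanics of slides 43 and 44 can be sketched schematically. The real decision logic lives inside WLM and PR/SM; the code below only restates the slides' rules, and reading "5%" as 5% of the donor's current weight is our assumption:

    # Schematic sketch of the donor/receiver weight adjustment on slides 43-44:
    # move roughly 5% of weight from a donor LP to the receiver, keep the
    # LPAR cluster's total weight constant, and respect min/max weights.
    def adjust_weights(weights: dict[str, int], receiver: str, donor: str,
                       min_w: dict[str, int], max_w: dict[str, int],
                       step_pct: float = 0.05) -> dict[str, int]:
        new = dict(weights)
        delta = round(step_pct * weights[donor])          # ~5% of donor's weight (assumed)
        delta = min(delta,
                    weights[donor] - min_w[donor],        # donor stays above its min
                    max_w[receiver] - weights[receiver])  # receiver stays below its max
        if delta > 0:
            new[donor] -= delta
            new[receiver] += delta                        # total weight unchanged
        return new

    print(adjust_weights({"ZOS1": 400, "ZOS2": 100}, receiver="ZOS2", donor="ZOS1",
                         min_w={"ZOS1": 300, "ZOS2": 50},
                         max_w={"ZOS1": 600, "ZOS2": 200}))
    # -> {'ZOS1': 380, 'ZOS2': 120}

Because the total shared weight stays constant, non-z/OS images keep their pool share, exactly as the slide notes.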

  45. Goals Should Reflect Reality MAKE SURE YOUR SCP GOALS AND IMPORTANCE REFLECT REALITY AT THE LPAR CLUSTER LEVEL! Because WLM thinks you knew what you were doing! SCP = Service Class Period • WLM = Workload Manager

  46. Goals / Reality Continued • In the past, the WLM goal mode SCPs on your "test" or "development" LPs had no impact on "production" LPs • If they are part of the same LPAR cluster, IRD will take resource away from (decrease the weight of) a "production" LP and add resource to (increase the weight of) a "test" LP in order to meet the goal of a higher-importance SCP on the "test" LP • Develop your service policy as though all SCPs were running on a single system image • WLM = Workload Manager • SCP = Service Class Period • LP = Logical Partition

  47. Workload Manager Level of Importance • WLM uses the Level of Importance YOU assign to make resource allocation decisions! [Diagram: example workloads - CICS, WEBSITE, DAVE'S STUFF, GUTTER WORK, BOB'S STUFF - ranked across Importance levels 1 through 5.]

  48. WLM Vary CPU Management • Varies logical CPs online/offline to LPs • Goals: • Higher effective logical CP speed • Less LPAR overhead and switching • Characteristics: • Aggressive: Vary logical CP online • Conservative: Vary logical CP offline • Influenced by IRD LP weight adjustments • LP = Logical Partition • Logical CP = Logical Processor

  49. Vary CPU Algorithm Parameters • Only initially online logical CPs are eligible • Logical CPs varied offline by the operator are not available • If a z/OS LP is switched to compatibility mode, all IRD weight and vary-logical-CP adjustments are "undone" and the LP reverts to its initial CP and weight settings • CP = Central Processor • LP = Logical Partition • Logical CP = Logical Processor

  50. What is Online Time? [Diagram: a 30-minute RMF interval for ZOS2, past (pre-IRD) versus present (IRD), showing LCP 0, LCP 1, and LCP 2, with LCP 2 varied offline during the interval; online time versus LCP dispatch time.] In the past, RMF only indicated that LCP 2 was not online at the end of the interval, and the interval length was the MAXIMUM time each LCP could have been dispatched. Now, RMF reports the online time for each LCP of each partition, along with the actual dispatch time.
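The practical consequence for reporting: with IRD, a logical CP's busy percentage should be computed against its online time within the RMF interval rather than the full interval length, otherwise an LCP varied offline mid-interval looks artificially idle. A small sketch (field names are descriptive, not actual RMF/SMF field names):

    # Sketch of the slide-50 point: compute LCP utilization against online time,
    # not against the whole RMF interval. Names here are descriptive only.
    def lcp_busy_pct(dispatch_sec: float, online_sec: float) -> float:
        return 100.0 * dispatch_sec / online_sec if online_sec else 0.0

    interval_sec = 30 * 60          # 30-minute RMF interval
    # Example: LCP 2 was varied offline after 10 minutes but was fully
    # dispatched while it was online.
    dispatch_sec, online_sec = 600.0, 600.0
    print(f"vs interval: {100 * dispatch_sec / interval_sec:.1f}%")        # 33.3% (misleading)
    print(f"vs online  : {lcp_busy_pct(dispatch_sec, online_sec):.1f}%")   # 100.0%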
