
LECTURE 8. Environments, Alternatives, and Decisions



  1. LECTURE 8. Environments, Alternatives, and Decisions

  2. Topics • Overview • Assessing the Target Processing Environment • Deciding on Scope and Level of Automation • Generating Implementation Alternatives • Choosing Implementation Alternatives • Considering Outsourced Solutions • Presenting the Results and Making the Decisions • Readings

  3. I. Overview Major activities of the analysis phase include: • Gather information • Define system requirements • Prototype for feasibility and discovery • Prioritize requirements • Generate and evaluate alternatives • Review recommendations with management The focus of this lecture is on the last three activities, which provide the transition from discovery and analysis to solutions and design. This step of the SDLC involves the following: • During analysis, many more requirements may be identified than can be dealt with • They must be prioritized and evaluated • Several alternative packages of requirements may be developed • A committee of executives and users decides which of them are most important • A system scope and level of automation are selected • Methods of development are reviewed

  4. II. Assessing the Target Processing Environment • The target processing environment should be considered first when selecting an appropriate solution. It includes the configuration of computer equipment, operating systems and networks that will exist when the new system is deployed • It should provide a stable environment to support the new system • Design and implementation of the processing environment is one of the early activities in moving from analysis to design Software application functions (a brief sketch of the separation follows) • Presentation logic (i.e. HCI) • Application logic (i.e. the processing of business rules) • Data access logic (i.e. the processing required to access data – database queries in SQL) • Data storage (i.e. data files) There are several alternatives for the processing environment.
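To make the separation of these functions concrete, here is a minimal Java sketch in which the data access logic is kept behind a single method, so the rest of the application never touches SQL directly. The connection URL, table and column names (customer, name, city) are hypothetical placeholders, not part of the RMO case.

import java.sql.*;
import java.util.*;

// Minimal sketch: data access logic isolated from the rest of the application.
// The JDBC URL, table and column names below are illustrative placeholders.
public class CustomerDao {
    private final String url = "jdbc:postgresql://dbhost/rmo";   // hypothetical database

    // Data access logic: the only place where SQL appears.
    public List<String> findCustomersInCity(String city) throws SQLException {
        String sql = "SELECT name FROM customer WHERE city = ?";
        try (Connection con = DriverManager.getConnection(url);
             PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setString(1, city);
            try (ResultSet rs = ps.executeQuery()) {
                List<String> names = new ArrayList<>();
                while (rs.next()) {
                    names.add(rs.getString("name"));
                }
                return names;   // handed back to the application (business) logic
            }
        }
    }
}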

  5. Target Processing Environment (cont’d) Centralized Systems • Prior to the early 1970s there was only one technological environment – the mainframe computer system at a central location • The only options focused on the kind of input/output (e.g. keypunch, key-to-tape, or interactive input using a video display terminal) and whether input/output devices would be placed in remote locations • Although mainframes are no longer the preferred platform for deploying ISs, they are still widely used as a subsystem of a larger, sometimes distributed information system or for large-scale batch processing applications (e.g. banking, insurance, government etc.) where: • Some input transactions don’t need to be processed in real time • On-line data-entry personnel can be centrally located • Large numbers of periodic outputs are produced by the system We can distinguish three types of centralized systems

  6. Target Processing Environment (cont’d) 1. Single Computer Architecture • Places all information system resources on a single computer system and its directly attached peripheral devices. See Figure 8-1, Section (a) • Users interact with the system via simple input/output devices directly connected to the computer • Requires that all users be located near the computer • All four software application functions are realized on a mainframe computer (server host) – server-based architecture. See Figure 8-2. Advantage: • Simplicity of maintenance: relatively easy to design, build and operate Disadvantage: • Capacity limits make a single computer impractical or unusable for large ISs: it cannot provide all the required processing, data storage, and data retrieval tasks. Many systems require more computing power than a single machine can provide, so a clustered or multicomputer architecture is required.

  7. Target Processing Environment (cont’d) FIGURE 8-1 Single, clustered and multicomputer architectures.

  8. Target Processing Environment (cont’d) FIGURE 8-2 Server-based architecture.

  9. Target Processing Environment (cont’d) 2. Clustered and Multicomputer Architectures • A clustered architecture is a group (or cluster) of computers of the same type that have the same operating environment and share resources • Computers from the same manufacturer and model family are networked together • Application programs may be executed on any machine in the cluster without modification, because the hardware and operating systems are similar • The cluster acts like a single large computer system (program movement and access to resources on other machines occur quickly and efficiently due to rapid and direct communication at the operating system level) • Often one computer acts as the entry point and the others function as slave computers. See Figure 8-1, Section (b)

  10. Target Processing Environment (cont’d) • A multicomputer architecture is a group of dissimilar computers that are linked together; the hardware and operating systems are not required to be as similar as in a clustered architecture • Hardware and software differences do not allow movement of application programs between computers (instead, resources are exclusively assigned to each computer system) • The system still functions like a single large computer • May have a central computer and slave computers – The main computer may execute programs and hold the database – A front-end computer may handle all communication lines with other computers or simple terminals See Figure 8-1, Section (c) Notes on Centralized Systems • Clustered architectures may be cost efficient and provide greater total capacity if a similar operating system and hardware are used • Multicomputer architectures are good when the centralized system can be decomposed into relatively independent subsystems (each possibly with its own operating system and/or hardware platform)

  11. Target Processing Environment (cont’d) Distributed Computing • Components of a modern IS are typically distributed across many computer systems and geographical locations • E.g. corporate financial data might be stored on a centralized mainframe, linked to minicomputers in regional offices (to periodically generate accounting and other reports based on data stored on the mainframe) and to personal computers at many more locations (to access and view periodic reports as well as to directly update the central database) • Such an organization is generally called distributed computing (or distributed processing), i.e. an approach to distributing a system across several computers and locations • This approach relies on communication networks to connect the geographically distributed hardware components • Recent changes in networking technology include: - Rapid increase in transmission capacity - Significant reduction in cost - Standardized methods of constructing and interacting with networks • These improvements have made distributed computing the preferred method of deploying the vast majority of business applications

  12. Target Processing Environment (cont’d) Computer Networks • A computer network is a set of transmission lines, equipment and communication protocols that permits sharing of information and resources among different users and computer systems. • Computer networks are divided into two classes (depending on the distance): - local area network (LAN) - wide area network (WAN) • A local area network (LAN) is a computer network where the distances are local (e.g. spanning less than one kilometer or connecting computers in the same building) • A wide area network (WAN) is a computer network spanning large distances (e.g. a city, province, nation or international area) Figure 8-3 shows the network for RMO (each geographic location is served by a LAN; all LANs are connected by a WAN). • A router connects each LAN to the WAN. • A router is a piece of equipment used to direct information within the network: it scans messages on the LAN and copies them to the WAN if they are addressed to a user on another LAN, and it scans messages on the WAN and copies them to the LAN if they are addressed to a local user or computer (a small sketch of this decision follows)
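As a rough illustration of that forwarding rule only, the sketch below represents each LAN by a hypothetical name prefix; real routers of course work with IP addresses, subnets and routing tables rather than string matching.

// Minimal sketch of the router's copy decision described above.
// LAN membership is modelled by a hypothetical name prefix, purely for illustration.
public class RouterSketch {
    private final String localLanPrefix;

    public RouterSketch(String localLanPrefix) {
        this.localLanPrefix = localLanPrefix;
    }

    // A message seen on the local LAN is copied to the WAN only if its
    // destination lies on some other LAN.
    public boolean copyFromLanToWan(String destination) {
        return !destination.startsWith(localLanPrefix);
    }

    // A message seen on the WAN is copied onto the local LAN only if its
    // destination is a local user or computer.
    public boolean copyFromWanToLan(String destination) {
        return destination.startsWith(localLanPrefix);
    }

    public static void main(String[] args) {
        RouterSketch parkCity = new RouterSketch("parkcity.");
        System.out.println(parkCity.copyFromLanToWan("portland.warehouse1")); // true
        System.out.println(parkCity.copyFromWanToLan("parkcity.mainframe"));  // true
    }
}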

  13. Target Processing Environment (cont’d) FIGURE 8-3 Network configuration for RMO.

  14. Target Processing Environment (cont’d) • LANs and WANs can be built using many technologies: – Ethernet and token ring are typically used to implement LANs (they provide a low to moderate amount of message-carrying capacity at relatively low cost) – WAN technologies (such as Asynchronous Transfer Mode) are more complex and expensive (they provide higher message-carrying capacity and greater reliability) – A WAN may be constructed using purchased equipment and leased long-distance transmission lines – WAN setup and operation may be subcontracted from long-distance communication vendors (e.g. AT&T, Sprint etc.) • Many services can be implemented: – direct communications (telephone and video conferencing) – message-based communications (e-mail) – resource sharing (access to electronic documents, application programs and databases) • There are many ways to distribute information resources: users, application programs and databases can be placed on the same computer, on different computers on the same LAN, or on different computers on different LANs

  15. Target Processing Environment (cont’d) Standard approaches to distributing resources include the following solutions. 1. Client-Server Architecture is currently the dominant architectural model for distributing information resources • A two-tier architecture divides the information system processes into two classes: – Server computer (server): a computer that manages one or more system resources and provides access to those resources and other services to other computers on the network – Client computer (client): a computer that uses a communication interface to request services from other computers on the network • Computer software that implements communication protocols on the network is called middleware Figure 8-4 shows a client-server architecture with a shared printer (an application on a PC sends a document to a server computer on the LAN; the server receives the request via its network interface card and dispatches it to a management process for the specified printer; when the document is printed, a message is sent back to the PC to notify the user that the printed document is ready) Figure 8-5 shows a “fat” client architecture; Figure 8-6 shows a “fat” server architecture (a minimal request/response sketch follows)
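A minimal Java sketch of the request/response pattern in a two-tier client-server arrangement: a server process manages a shared service and a client sends a request over the network and waits for the reply. The port number, host name and one-line message format are arbitrary illustrative choices; real middleware handles much more (naming, security, concurrency).

import java.io.*;
import java.net.*;

// Minimal client-server sketch: the server offers a trivial shared service
// (upper-casing a line of text) and the client requests it over a socket.
public class ClientServerSketch {

    // Server: accepts one request per connection and returns a response.
    static void runServer() throws IOException {
        try (ServerSocket listener = new ServerSocket(5000)) {
            while (true) {
                try (Socket client = listener.accept();
                     BufferedReader in = new BufferedReader(
                             new InputStreamReader(client.getInputStream()));
                     PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
                    String request = in.readLine();                      // read the client's request
                    out.println(request == null ? "" : request.toUpperCase()); // send back a response
                }
            }
        }
    }

    // Client: sends a request to the server and prints the response.
    static void runClient(String message) throws IOException {
        try (Socket socket = new Socket("localhost", 5000);
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(socket.getInputStream()))) {
            out.println(message);
            System.out.println("Server replied: " + in.readLine());
        }
    }

    public static void main(String[] args) throws IOException {
        if (args.length > 0 && args[0].equals("server")) {
            runServer();
        } else {
            runClient("print this document");
        }
    }
}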

  16. Target Processing Environment (cont’d) FIGURE 8-4 Client-server architecture with a shared printer.

  17. Target Processing Environment (cont’d) FIGURE 8-5 “Fat” client architecture.

  18. Target Processing Environment (cont’d) FIGURE 8-6 “Fat” server architecture.

  19. Target Processing Environment (cont’d) 2. Client-Server Tiers (Layers) We can consider the following set of client and server processes or layers: • The data layer is a layer in a client-server configuration that manages stored data, implemented as one or more databases • The business logic layer contains the programs that implement the rules and procedures of business processing (the program logic of the application) • The view layer contains the user interface and other components used to access the system (it accepts user input, and formats and displays processing results) • This approach is called three-layer architecture Figure 8-7 illustrates the three-layer architecture (the view layer acts as a client of the business logic layer, which, in turn, acts as a client of the data layer) Figure 8-8 illustrates the three-layer architecture from the software application functions perspective (a small sketch of the three layers follows)
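The split can be sketched in a few lines of Java: each layer is a separate class, the view calls only the business logic layer, and the business logic layer calls only the data layer. The class names, the volume-discount rule and the in-memory "database" are all hypothetical illustrations, not part of the RMO design.

import java.util.*;

// Minimal sketch of the three-layer split described above (illustrative only).
public class ThreeLayerSketch {

    // Data layer: manages stored data and hides how it is kept.
    static class DataLayer {
        private final Map<String, Double> prices = Map.of("parka", 120.0, "boots", 85.0);
        double priceOf(String item) { return prices.getOrDefault(item, 0.0); }
    }

    // Business logic layer: implements the rules of order processing;
    // acts as a client of the data layer.
    static class BusinessLogicLayer {
        private final DataLayer data = new DataLayer();
        double orderTotal(String item, int quantity) {
            double total = data.priceOf(item) * quantity;
            if (quantity >= 10) total *= 0.95;   // example business rule: volume discount
            return total;
        }
    }

    // View layer: accepts user input and formats the result;
    // acts as a client of the business logic layer.
    public static void main(String[] args) {
        BusinessLogicLayer logic = new BusinessLogicLayer();
        System.out.printf("Total for 10 parkas: $%.2f%n", logic.orderTotal("parka", 10));
    }
}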

  20. Target Processing Environment (cont’d) FIGURE 8-7 Three-layer architecture.

  21. Target Processing Environment (cont’d) FIGURE 8-8 Three-layer architecture and the software application functions.

  22. Target Processing Environment (cont’d) • An IS divided into three layers is relatively easy to distribute and replicate across a network (interactions among the layers always take the form of either a request or a response) • This makes the layers relatively independent of one another, so they can be placed on different computer systems, with network connections and middleware serving as the links between them N-Layer Client-Server Architecture • When processing requirements or data resources are complex, the three-layer architecture can be expanded into a larger number of layers (n-layer or n-tiered architecture). Figure 8-9 shows an example in which the data layer is split into two separate layers: a combined database server and servers that control the individual databases (marketing, production, accounting). The business logic layer interacts with the combined database server, which provides a unified view of the data stored in the several different databases. The responses from the individual database servers are combined to create a single response that is sent to the business logic layer (a small sketch of this aggregation follows). Figure 8-10 is an example of a four-tier client-server architecture (two web servers with application logic are used).
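A minimal sketch of that aggregation idea: a combined server fans the same query out to several per-department database servers and merges their answers into one response for the business logic layer. The department names and returned rows are invented for illustration.

import java.util.*;

// Illustrative sketch of the combined database server in Figure 8-9.
public class CombinedDatabaseServerSketch {

    // Stand-in for an individual database server (marketing, accounting, ...).
    interface DatabaseServer {
        List<String> query(String customerId);
    }

    // The combined server presents a unified view of several databases.
    static class CombinedServer {
        private final Map<String, DatabaseServer> servers;
        CombinedServer(Map<String, DatabaseServer> servers) { this.servers = servers; }

        List<String> unifiedQuery(String customerId) {
            List<String> combined = new ArrayList<>();
            for (Map.Entry<String, DatabaseServer> e : servers.entrySet()) {
                for (String row : e.getValue().query(customerId)) {
                    combined.add(e.getKey() + ": " + row);   // tag each row with its source
                }
            }
            return combined;   // single response sent back to the business logic layer
        }
    }

    public static void main(String[] args) {
        Map<String, DatabaseServer> servers = new LinkedHashMap<>();
        servers.put("marketing",  id -> List.of("preferred-customer mailing"));
        servers.put("accounting", id -> List.of("balance 0.00"));
        System.out.println(new CombinedServer(servers).unifiedQuery("C-1001"));
    }
}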

  23. Target Processing Environment (cont’d) FIGURE 8-9 N-layer architecture.

  24. Target Processing Environment (cont’d) FIGURE 8-10 Four-tier architecture and the software application functions.

  25. Target Processing Environment (cont’d) The Internet and Intranets • The Internet and World Wide Web are becoming an increasingly popular framework for implementing and delivering IS applications. • The Internet is a global collection of networks that are interconnected using a common low-level networking standard (protocol) – TCP/IP (Transmission Control Protocol/Internet Protocol) • The Internet provides many services: – E-mail protocols (e.g. Simple Mail Transfer Protocol – SMTP) – File transfer protocols (e.g. File Transfer Protocol – FTP) – Remote login and process execution protocols (e.g. Telnet) and remote procedure calls • The World Wide Web (WWW) is a collection of resources such as files, programs and services that can be accessed over the Internet using standard protocols and formats, including: – Formatted and linked document standards, e.g. HyperText Markup Language (HTML) and Extensible Markup Language (XML), and the Hypertext Transfer Protocol (HTTP) – Executable program standards including Java, JavaScript and Visual Basic Script (VBScript) • The Internet is the infrastructure upon which the WWW is based (i.e. resources of the web are delivered to users over the Internet); a minimal HTTP request sketch follows
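To show how the web's request/response protocol looks from a client program, here is a minimal Java 11+ sketch that retrieves an HTML resource over HTTP; the URL is a placeholder, not an address from the lecture.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Minimal sketch of a web client issuing an HTTP GET request and reading the response.
public class HttpSketch {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://example.com/catalog.html"))   // hypothetical resource
                .GET()
                .build();
        // The response carries a status code, headers, and the HTML body.
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("Status: " + response.statusCode());
        String body = response.body();
        System.out.println(body.substring(0, Math.min(200, body.length())));
    }
}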

  26. Target Processing Environment (cont’d) Intranets and Extranets • An intranet is a private network that uses the same TCP/IP protocols as the Internet but is accessible to a limited number of users (members of the same organization or workgroup) – Restricted access can be accomplished by firewalls, passwords and unadvertised resource names • An extranet is an intranet that has been extended outside of the organization to include outside users (suppliers, large customers, and strategic partners) – It allows organizations to exchange information, coordinate their activities and in this way form a virtual organization • The web is organized as a client-server architecture (web resources are managed by server processes that can be executed on dedicated servers or on multipurpose computer systems; clients send requests to servers using a standard web resource request protocol)

  27. Target Processing Environment (cont’d) The Internet/Intranet as an Application Platform • The Internet provides an alternative for implementing systems – E.g. RMO’s buyers can access the system while on the road – the client portion of the application is installed on their laptop computers (which use a modem to connect) – Alternatively, using the WWW to access the remote site, all the buyer needs is a web browser; this increases the application’s accessibility and eliminates the need to install custom client software – it is also cheaper to put the application up on the Web

  28. Target Processing Environment (cont’d) Advantages of the web, intranets and extranets over traditional client-server approaches: • Accessibility – Web browsers and Internet connections are widespread and accessible to large numbers of users • Low-cost communication – High-capacity WANs that form the Internet are funded primarily by governments – Traffic on the networks travels free of charge to the user – Connections between private LANs and the Internet can be purchased from a variety of providers at relatively low cost – Companies can use the Internet as a low-cost WAN • Widely implemented standards – Web standards are well known, and many computer professionals are trained in their use – Server, client and application development software is widely available and relatively cheap – Use of an intranet or extranet enjoys all the advantages of web delivery (since the standards are the same) – This really represents an evolution of client-server computing to the WWW using off-the-shelf technology

  29. Target Processing Environment (cont’d) Negative aspects of application delivery via Internet and web technologies: • Security – Web servers are a well-defined target for security breaches because web standards are open and widely known • Reliability – Internet protocols do not guarantee a minimum level of network throughput or that a message will ever be received by its intended recipient • Throughput – The data transfer capacity of many users (home and business) is limited by analog modems to under 56 kilobits per second – Internet WANs become overloaded during high-traffic periods, resulting in slow response time for all users and long delays when accessing large resources • Volatile standards – Web standards change rapidly, and client software is updated every few months – Developers are always faced with a dilemma: use the latest standards (to increase functionality) or use older standards to ensure greater compatibility with user software

  30. Target Processing Environment (cont’d) Development and System Software Environments • The development environment consists of the standards and tools used in an organization (e.g. specific languages, CASE tools, programming standards) • The system software environment includes the operating system, network protocols, database management systems etc. • An important activity of the analysis phase is to determine the components of the environment that will control the development of the new application Important components of the environment that will affect the project: • Language environment and expertise – Companies often have preferred languages (but these are subject to change as technology changes) – Today’s developers have numerous languages to choose from: structured languages such as COBOL, object-oriented languages such as C++ and Visual Basic, web-based languages such as Java and Perl, and development environments such as PowerBuilder – Choosing a new language requires additional work and funding to provide the necessary training to the team

  31. Target Processing Environment (cont’d) • Operating system environment – Strategic goals may exist to change the operating system, especially in client-server and multitier environments – Multiple platforms (i.e. types of computers and system software) and operating systems may be used, which creates complex requirements for interfaces and communication links – Legacy systems often still provide transaction support and must be linked to newer client-server applications and databases • Existing CASE tools and methodologies – If a company has invested heavily in a CASE tool, then all new development may have to conform to the tool’s methodology – Using a CASE tool also frequently dictates the implementation language and methodology – Some CASE tools generate programming data structures or code components, which will constrain the development environment

  32. Target Processing Environment (cont’d) • Required interfaces to other systems –A new system typically must provide information to and receive it from existing systems –Often information must be shared across different hardware platforms, operating systems and databases and at various locations –It may require specific mini-projects to define interface requirements and write interface programs • Database management system (DBMS) –Many corporations have committed to a particular database vendor –May require a distributed database environment with portions distributed over the country  –Options exist to link to existing databases or to integrate the data into one large consolidated database –New data warehousing technology may require all new applications to connect to the database –In any case, the database is an important aspect of the processing environment that must be finalized during analysis and before design

  33. Target Processing Environment (cont’d) Rocky Mountain Outfitters Example: the systems environment • Current Environment (see Table 8-1) consists of: – A mainframe located at the home office in Park City – Mail order (in Provo, Utah) and the three warehouse distribution sites (in Salt Lake City, Portland, and Albuquerque) are connected directly to the mainframe to allow real-time connection of terminals – The communication technology is based on high-volume mainframe transaction technology – Mainframe applications written in COBOL and a DB2 database are used – Dial-up telephone lines are used to communicate with the manufacturing sites in Salt Lake City and Portland (each manufacturing facility has its own LAN) – Updates to the central inventory system are done in batch mode daily via the dial-up connection – The retail stores have local client-server systems that collect sales and financial information through the cash registers (this information is forwarded to the central accounting and financial systems residing on the mainframe in batch mode daily) – The phone-order system in Salt Lake City is a small Windows application running in a client-server environment (it is not well integrated with the rest of the inventory and distribution systems) – Other applications (human resources and general accounting) are also mainframe-based systems

  34. Target Processing Environment (cont’d) Table 8-1 The existing processing environment at RMO.

  35. Target Processing Environment (cont’d) • Proposed Environment – Many decisions are made during strategic planning – In other situations, the strategic plan is modified as new systems are developed – Table 8-2 shows various environments possible for RMO. The alternatives are listed by type of technology and degree of centralization. – The first three alternatives consider whether to • Move to Internet technology • Utilize internal LAN/WAN technology • Use a mix of the two options – The next two alternatives focus on the equipment: • Use a mainframe central processor • Use distributed client-server processors – Other considerations regard the database technology: • Use traditional relational database technology • Or, move to object-oriented databases

  36. Target Processing Environment (cont’d) Table 8-2 Processing environment alternatives at RMO.

  37. Target Processing Environment (cont’d) Table 8-3 lists the major components of the strategic direction for RMO • RMO wants to be state-of-the-art • But it also wants to avoid high-risk projects • The strategic plan is to – Move away from the COBOL mainframe environment – Move to a combined client-server configuration (the mainframe will remain as the central database server; the other two tiers are application servers, which will contain business logic as well as Internet server capabilities; users will have individual client PCs connected to the application servers)

  38. Target Processing Environment (cont’d) Table 8-3 Strategic directions for the processing environment at RMO.

  39. III. Deciding on Scope and Level of Automation Prioritizing requirements includes defining both the scope and the level of automation Scope of a system • The scope of the system defines which business functions will be included in the system • A problem with development projects: requests for additional functions continue after the requirements are defined and decisions are made • To avoid this problem we need to formalize the process of selecting which functions are critical and which are not • A common approach is to list the requested functions and categorize them as “mandatory”, “important” or “desirable”. This information is presented in a scoping table • A scoping table is a tabular list of all the functions to be included within a system • It is an expanded version of the event table • It may include events from the event table plus some events identified later on (e.g. the event table for the CSS may only identify a customer sale, while the scoping table might need to distinguish among mail sales, telephone sales and Internet sales)

  40. Scope and Level of Automation (cont’d) • Each business function can be prioritized as – Mandatory – Important – Desirable Table 8-4 shows the scoping table for the CSS (the additional functions are indicated by background shading) Defining Level of Automation • The level of automation is a description of the kind of support the system will provide for each function • For each function at least three levels can be defined: low, middle, and high • A low level is characterized by the following features (a small sketch follows): - the computer system only provides simple record keeping - data input screens capture information and insert it into a database - simple field edits and validations on input data are included - the system date may be used for the order date - line items for the order are entered manually - the system may or may not automatically calculate the price - stock on hand is usually not verified - at the end of entering the order, the information is stored in the database
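A minimal Java sketch of what the low level of automation amounts to in code: simple field edits and validations on an order entry, with the system date used as the order date. The field names and the item-code format are hypothetical, not taken from the RMO case.

import java.time.LocalDate;
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of low-level automation: record keeping with simple field edits.
public class OrderEntrySketch {

    static List<String> validate(String customerId, int quantity, String itemCode) {
        List<String> errors = new ArrayList<>();
        // Simple field edits: presence, range and format checks only.
        if (customerId == null || customerId.isBlank()) errors.add("Customer ID is required");
        if (quantity <= 0) errors.add("Quantity must be positive");
        if (itemCode == null || !itemCode.matches("[A-Z]{2}-\\d{4}")) {
            errors.add("Item code must look like 'AB-1234'");
        }
        return errors;
    }

    public static void main(String[] args) {
        List<String> errors = validate("C-1001", 3, "PK-0120");
        if (errors.isEmpty()) {
            LocalDate orderDate = LocalDate.now();   // the system date is used as the order date
            System.out.println("Order accepted on " + orderDate + "; stored in the database.");
        } else {
            errors.forEach(System.out::println);
        }
    }
}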

  41. Scope and Level of Automation (cont’d) Table 8-4 Scoping list of potential functions for RMO (shading indicates new additions).

  42. Scope and Level of Automation (cont’d) • A middle-range level of automation is a combination of features from the high-level and low-level alternatives (usually it is a compromise between what is necessary and what is justified given the current technology and budget) • A high level occurs when the system takes over as much of the processing of the function as possible (it is more difficult for an analyst to define high-end automation than low-end automation, since low-end automation is basically an automated version of a current manual procedure) • High-end automation often involves creating new processes and procedures

  43. Scope and Level of Automation (cont’d) Rocky Mountain Outfitters – example of functions of a high-end system – Customers can access the catalog on-line with full-color, 3D pictures over the web – The catalog is also interactive and allows customers to combine items – The user interface to the catalog is voice-activated – The system should make suggestions of related items that the customer may need or want to purchase – The system should verify that all items are in stock – Items not in stock should be immediately ordered from the manufacturer or other supply sources – Payment is verified on-line – The customer can see a history of all prior orders and can check the status of any order over the web or telephone • All of the named capabilities can be supported with current technologies. The question is whether RMO can justify the cost at this point in time. • Table 8-5 expands Table 8-4 by replacing the description column with three columns to show the various levels of automation (it provides an overview of the functions, their priority and methods of implementation at the different levels of automation)

  44. Scope and Level of Automation (cont’d) Table 8-5 Preliminary selection of alternative functions with three levels of automation for RMO (selections are shaded).

  45. Scope and Level of Automation (cont’d) Selecting Alternatives • Currently more and more new systems are being built to provide high-level automation solutions • Defining the scope and level of automation is becoming a critical and important aspect of system development • The criteria used to decide which functions to support, and at which level of automation, are based on – The strategic IT plan – The feasibility study (as considered in Lecture 2), which includes (1) Economic feasibility (2) Operational, organizational and cultural feasibility (3) Technological feasibility (4) Schedule and resource feasibility

  46. Scope and Level of Automation (cont’d) Evaluation of Alternatives for the RMO example • Based on the preliminary budget and resource availability, the project team decided to include all functions that were classified as mandatory or important • For each of those functions, a detailed analysis was done to determine the level of automation • Table 8-6 lists the functions and shows by shading which functions are to be included and at which level of automation • The low level of automation was not acceptable since most of the current system already provided this level of automation • For most functions, a medium level of automation was selected (see the shaded boxes), since high-end automation is not within RMO’s budget at this point in time.

  47. Table 8-6 Selection of alternative functions and level of automation (selections are shaded).

  48. IV. Generating Implementation Alternatives • After deciding on scope and level of automation, the system needs to be designed and programmed • Options include buying a software package if the application is fairly standard, or building the system from the ground up (in-house or by bringing in outside contract programmers) • Figure 8-11 shows the implementation alternatives in graphical form. – The vertical axis is the build-versus-buy axis (at the top of the axis the entire system is bought as a package; at the bottom, the entire system is built from the ground up; in between is a combination of buy and build) – The horizontal axis shows the choice of developing the system in-house versus outsourcing the project – The diagram illustrates that the various alternatives all combine aspects of building, buying, in-house development and outsourcing.

  49. Generating Implementation Alternatives (cont’d) Figure 8-11 Implementation alternatives.

  50. Generating Implementation Alternatives (cont’d) Facilities Management • Facilities management is an organization’s strategic decision to move all development, data processing and information technology to an outside provider – E.g. a bank may hire a facilities management firm to provide all of its data processing, including software, networks and even technical staff • Typically this solution is part of a long-term strategic plan for the entire organization (it applies to the entire organization, not just a single development project) • Usually it is a top executive decision • Contracts are costly (often millions of dollars) – for example, EDS (Electronic Data Systems) provides facilities management services to many industries, e.g. banking, health care, insurance and government
