How Managers Plan for Performance
This article provides ten performance planning tips to help ensure that your application performs as required when delivered. Recognize that applications require performance tuning at various stages of a project, that tuning takes time and resources, and that you should plan for it accordingly. The following suggestions may help you determine how to budget for this.

Develop in-house performance experts
It is great to have one or two developers who understand performance tuning, and the overall budget is cheaper if these are in-house resources. At the very least, your in-house experts will be able to handle the basics, such as establishing benchmarks and stating response-time goals, even if you later hire an outside expert to validate the process or provide more detailed advice. There is a wealth of interesting Java performance tuning material and tooling available, so developers often regard performance tuning as a rewarding task. The Java Performance Tuning website lists many resources for Java performance tuning and can be a good place for your developers to start. Budget for books and magazines, for training and web-browsing time, and for evaluating and purchasing various testing tools, such as client-side emulation tools. These tool choices carry different costs and tuning times; be prepared for this, and make sure you take the right approach to choosing what suits your needs. Understanding performance tuning and measurement tools may not be a developer's top priority, and if things go well it probably never will be; but in my experience, the closer you get to the end of a project, the more time in-house performance experts have to spend on performance tuning.

Define performance requirements in the specification
The performance requirements of an application need to be defined at the specification stage. This is not primarily a developer task: the customer and the business experts need to determine what response times are acceptable, and it may be easier to start by declaring what response times are unacceptable. The task can be revisited later in development; in fact it may be easier once a prototype has been written, using the prototype and other business information to settle on acceptable response times. But do not neglect to declare response-time requirements before starting any implementation tuning. If code tuning begins before performance requirements are declared, the goals to be achieved will be ill-defined, and effort will be wasted tuning parts of the application that do not need it. If your development environment is tier-based (application tier, component tier, technology architecture tier), try to define performance specifications for each tier so that each team has its own performance goals to achieve. If not, your performance experts will need to tune across tiers and communicate with all the teams.
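Declared goals are most useful when tests can check them mechanically. The following is a minimal Java sketch of one way response-time requirements might be encoded for automated checking; the operation names and limits are hypothetical illustrations, not values from any real specification.

```java
import java.time.Duration;
import java.util.Map;

// Minimal sketch: encode declared response-time requirements so that
// automated tests can check measurements against them.
// All operation names and limits here are hypothetical.
public class ResponseTimeRequirements {

    // Maximum acceptable response times per operation, taken from the spec.
    private static final Map<String, Duration> MAX_RESPONSE = Map.of(
            "login", Duration.ofMillis(500),
            "searchOrders", Duration.ofSeconds(2),
            "monthlyReport", Duration.ofSeconds(30));

    /** Returns true if the measured time meets the declared requirement. */
    public static boolean meetsRequirement(String operation, Duration measured) {
        Duration max = MAX_RESPONSE.get(operation);
        if (max == null) {
            throw new IllegalArgumentException("No requirement declared for " + operation);
        }
        return measured.compareTo(max) <= 0;
    }

    public static void main(String[] args) {
        long start = System.nanoTime();
        // ... exercise the operation under test here ...
        Duration elapsed = Duration.ofNanos(System.nanoTime() - start);
        System.out.println("login within goal: " + meetsRequirement("login", elapsed));
    }
}
```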
Analyze performance during the analysis phase
In the analysis phase, the main performance focus is to analyze the application's requirements for shared and limited resources. For example, a network connection is both a shared and a limited resource, a database table is a shared resource, and a thread is a limited resource; resources that are not designed correctly at this stage cost the most to fix later. Data volumes and load-carrying capacity should also be analyzed to determine the limitations of the system. This task should be integrated into the normal analysis phase; to be safe, or to highlight the need for performance analysis, you may want to allocate a percentage of the planned analysis time specifically to performance. It is important that the analysis team understands the impact of different design choices on performance; with that understanding, they are less likely to miss aspects of the system that need analyzing. A book on designing to performance goals, such as High Performance Client/Server, is helpful here. The analysis should be carried out in conjunction with the technical architecture analysis, and at the end of it you should have an architecture document that clearly identifies the performance aspects of the system.

Require performance predictions from the design
During the design phase, the performance considerations begun in the analysis phase should be carried forward: the design should address the shared resources the application uses and the performance implications of the physical architecture planned for deployment. Ensure that the designers are aware of the performance consequences of their decisions; predicting the performance impact of design choices should be a normal part of design. Design validation should include input from a performance expert who is well versed in the design choices being made, and, additionally, an independent performance expert familiar with the design should review the application design. If a significant third-party product is involved, such as a middleware or database product, the vendor should be able to supply a performance expert who can validate the design and identify potential performance problems. To emphasize the importance of performance, it is always a safe choice to allocate a percentage of the design budget to performance. The design should include information about how the application scales with the number of users and with data and object volumes; the number of possible distributed configurations of the application and the messaging requirements between them; and, for multi-user applications, the transaction mechanisms and modes (optimistic or pessimistic, which locks are required, and how long transactions and locks remain active). Theoretical performance limits are determined by the number of shared resources and by how long locks are held, as the sketch after this section illustrates. Where relevant, the design should also include a section on handling queries against large data sets.
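As a worked illustration of that last point, here is a back-of-envelope sketch assuming a pessimistic-locking design in which each transaction holds one lock at a time; the lock count and hold time are invented for the example, not measurements.

```java
// Back-of-envelope estimate of a theoretical throughput ceiling, assuming a
// pessimistic-locking design where each transaction holds one lock at a time.
// Both figures below are invented for illustration, not measurements.
public class LockThroughputEstimate {
    public static void main(String[] args) {
        int independentLocks = 8;     // shared resources that can be held in parallel
        double lockHoldMillis = 25.0; // average time a transaction holds its lock

        // Each lock serves at most 1000 / lockHoldMillis transactions per second,
        // so the system as a whole is capped at that rate times the lock count.
        double maxTxPerSecond = independentLocks * (1000.0 / lockHoldMillis);
        System.out.printf("Theoretical ceiling: %.0f transactions/sec%n", maxTxPerSecond);
    }
}
```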
Create a performance test environment
The performance task at the start of the development phase is to set up a performance test environment (code tuning itself belongs at the end of the development phase; see the later point on multi-scale performance testing). You need to:
- Declare benchmarks based on the functionality from the specification phase and the required response times (see the earlier point on defining performance requirements).
- Ensure the test environment reproduces the target system reasonably accurately.
- Establish rules reserving the test environment exclusively for performance tests; if the environment is shared, performance testing must not take place at the same time as other activities.
- Purchase or create the performance testing tools needed to drive the application with simulated users and other external activity.
- Create reusable performance tests that produce reproducible application activity. (Note that this is not QA testing: performance tests should generally not probe the failure modes of the system, only its behavior within the expected limits of normal activity.)
- Prepare the test and monitoring environment (this is a normal system administration detail and usually evolves as testing progresses). You will ultimately need network-level and application-level performance statistics (discussed further in the next points), as well as performance monitoring tools or scripts to watch overall system performance.
- Plan code versioning and releases from your development environment to your performance environment according to your performance test plan (again, this is not QA). Note that keeping the tests running often requires a rapid round of patching, and time constraints often mean that waiting for full QA certification is not possible, so some developer support will be required and should be planned for.

Test a simulation or skeleton system for acceptance
Create a simulation of the system for acceptance testing, or use a skeleton version of the system. The simulation should realistically represent the main components of the application. It should be implemented so that you can test the scalability of the system, determine how shared resources respond to increasing load, and identify the point at which limited resources start to run out or become bottlenecks. If budget is not available, the initial simulation can be skipped, but testing should begin as soon as enough components exist to assemble a skeleton version of the system. The purpose is to determine the response times and scalability of the system as early as possible, as feedback for design acceptance. If you have a planned proof-of-concept phase, it can provide the simulation, or a good basis for one; ideally, acceptance will occur as part of the proof of concept.

Integrate performance logging at the application layer boundaries
Integrate performance logging into the application; this logging should be deployed with the released application. Performance logging should be added at all major layer boundaries: I/O and marshalling layers (applet or client I/O and marshalling, server JVM I/O and marshalling), database access and update, transaction boundaries, and the like. The logging should be designed so that it adds only a small percentage of overhead to application activity. Ideally it should aggregate data, so that it can be configured to produce one summary log line per configurable time unit (for example, one summary line per minute). For easier manipulation and analysis, the logging format should be designed so that the output can be consumed by other tools, as the sketch below suggests.
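Here is a minimal sketch of what such boundary logging might look like, with in-memory aggregation and one machine-parsable summary line per flush; the class name, the PERF line format, and the boundary names are assumptions for illustration.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

// Minimal sketch of low-overhead performance logging at layer boundaries:
// timings are aggregated in memory and emitted as one summary line per
// interval. The class name, PERF format, and boundary names are assumptions.
public class BoundaryPerfLog {
    private static final Map<String, LongAdder> totalNanos = new ConcurrentHashMap<>();
    private static final Map<String, LongAdder> counts = new ConcurrentHashMap<>();

    /** Record one timed crossing of a named layer boundary. */
    public static void recordTiming(String boundary, long elapsedNanos) {
        totalNanos.computeIfAbsent(boundary, k -> new LongAdder()).add(elapsedNanos);
        counts.computeIfAbsent(boundary, k -> new LongAdder()).increment();
    }

    /** Emit one machine-parsable summary line per boundary, then reset. */
    public static void flushSummary() {
        long now = System.currentTimeMillis();
        for (String boundary : counts.keySet()) {
            LongAdder c = counts.remove(boundary);
            LongAdder t = totalNanos.remove(boundary);
            if (c == null || t == null) continue; // raced with a concurrent recordTiming
            long n = c.sum();
            long totalMicros = t.sum() / 1000;
            // Fixed-field output so other tools can parse the log directly.
            System.out.printf("%d PERF %s count=%d avgMicros=%d%n",
                    now, boundary, n, n == 0 ? 0 : totalMicros / n);
        }
    }

    public static void main(String[] args) {
        long t0 = System.nanoTime();
        // ... a database call would go here ...
        recordTiming("db.query", System.nanoTime() - t0);
        flushSummary(); // in a real app a scheduled task would call this each minute
    }
}
```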
Performance-test the system at multiple scales and tune using the results
Test the performance of the system at multiple scales and use the results to tune it. Unit performance testing should be scheduled along with QA during code implementation; unit-level performance tuning will not be required until QA is ready for it. It is important to test the whole system, or a simulation of it, even while many of the units are incomplete; simulating the missing units is acceptable early in system performance testing. Initially, the purpose of system performance testing is to validate the design and architecture and to characterize any parts of the design or implementation that will not scale (see the earlier point on testing a simulation for acceptance). Later tests let the developers identify bottlenecks in the system directly and produce successively faster versions of the application. To support performance testing in the later stages, it should be possible to set up a test environment that provides not only the application performance logs but also performance summaries for any JVM processes, together with system and network statistics; ideally, the system administrators will already have these techniques in place. Performance tests should measure the system above the expected peaks of user and data load: test at twice the expected peak data volumes, twice the expected peak throughput, and twice the expected peak number of users. User activity should be simulated as accurately as possible, but it is of the utmost importance that the simulation produces the kind of data distributions expected in real use, otherwise cache activity can produce completely misleading results. Measure against realistic numbers of objects; this is especially important for retrieval tests and bulk updates. Never underestimate the effort required to create large volumes of realistic data for a test.

Deploy the application with its performance logging features
The performance logging features should be deployed with the released application. The logs allow remote analysis and continuous performance monitoring of the deployed application. It is best to write your own tools to analyze the performance logs automatically. The simplest acceptable log-analysis tool is one that compares the logged performance with a set of reference logs and highlights anomalies. Other useful tools include one that identifies long-term trends in the performance logs, one that identifies when particular performance measures move out of their expected ranges, and one that has a graphical interface or supports standard GUI management tools.
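To make that simplest tool concrete, the following minimal sketch compares the average timings in a current log against a reference log and flags regressions; it assumes the hypothetical PERF summary format from the logging sketch above and an arbitrary 20% threshold.

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of the simplest acceptable log-analysis tool: compare a
// current performance log against a reference log and flag regressions.
// Usage: java PerfLogDiff reference.log current.log
// Assumes the hypothetical "timestamp PERF boundary count=N avgMicros=M" format.
public class PerfLogDiff {

    // Parse summary lines into a map of boundary name -> average microseconds.
    static Map<String, Long> parse(Path log) throws Exception {
        Map<String, Long> avg = new HashMap<>();
        for (String line : Files.readAllLines(log)) {
            String[] f = line.split(" ");
            if (f.length == 5 && f[1].equals("PERF")) {
                avg.put(f[2], Long.parseLong(f[4].substring("avgMicros=".length())));
            }
        }
        return avg;
    }

    public static void main(String[] args) throws Exception {
        Map<String, Long> reference = parse(Path.of(args[0]));
        Map<String, Long> current = parse(Path.of(args[1]));
        for (var e : current.entrySet()) {
            Long base = reference.get(e.getKey());
            // Flag anything more than 20% slower than the reference run (arbitrary threshold).
            if (base != null && e.getValue() > base * 1.2) {
                System.out.printf("ANOMALY %s: %d us vs reference %d us%n",
                        e.getKey(), e.getValue(), base);
            }
        }
    }
}
```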