Overview
J2EE provides many architectural choices. J2EE also offers many component types (such as servlets, EJBs, JSP pages, and servlet filters), and J2EE application servers provide many additional services. While this array of options enables us to design the best solution for each problem, it also poses dangers. J2EE developers can be overwhelmed by the choices on offer, or can be tempted to use infrastructure inappropriate to the problem in hand, simply because it's available.
In this chapter, we discuss the high-level choices in developing a J2EE architecture, and how to decide which parts of J2EE to use to solve real problems. We'll look at:
- Distributed and non-distributed applications, and how to choose which model is appropriate
- The implications for J2EE design of changes in the EJB 2.0 specification and the emergence of web services
- When to use EJB
- Data access strategies for J2EE applications
- Four J2EE architectures, and how to choose between them
- Web tier design
- Portability issues
This book reflects my experience and discussions with other enterprise architects. I will attempt to justify the claims made in this chapter in the remainder of the book. However, there are, necessarily, many matters of opinion.
In particular, the message I'll try to get across will be that we should apply J2EE to realize OO design, not let J2EE technologies dictate object design.
Summary
In this chapter we've considered some of the most important choices to be made in J2EE development projects, other than the architectural decisions we considered in Chapter 1. We've looked at:
- How to choose an application server. One of the strengths of the J2EE platform is that it allows a choice of competing implementations of the J2EE specifications, each with different strengths and weaknesses. Choosing the appropriate application server will have an important influence on a project's outcome. We've looked at some of the major criteria in choosing an application server, stressing the importance of considering the project's specific requirements, rather than marketing hype. We've seen the importance of choosing an application server early in the project lifecycle, to avoid wasting resources getting up to speed with multiple servers. We've considered the issue of total cost of ownership, of which license costs are just a part.
- Managing the technology mix in an enterprise. While an unnecessary proliferation of different technologies will make maintenance more expensive forever, it's important to recognize that J2EE isn't the best solution to all problems in enterprise software development. We should be prepared to use other technologies to supplement J2EE technologies where they simplify implementation.
- Practical issues surrounding J2EE portability. We've seen how to ensure that we don't unintentionally violate the J2EE specifications, by regularly running the verification tool supplied with Sun's J2EE Reference Implementation, and how to ensure that application design remains portable even if we have good reason to use proprietary features of the target platform.
- Release management practices. We've seen the importance of having distinct Development, Test, and Production environments, and of having a well-thought-out release management strategy.
- Issues in building and managing a team for a J2EE project. We've considered the implications of using a "Chief Architect," as opposed to a more democratic approach to architecture, and considered two common team structures: the "vertical" structure, which uses generalists to implement whole use cases, and the "horizontal" structure, which focuses developers on individual areas of expertise. We've considered a possible division of roles in the "horizontal" team structure.
- Development tools. We've briefly surveyed the types of tools available to J2EE developers. We've stressed the importance of the Ant build tool, which is now a de facto standard for Java development.
- Risk management. We've seen that successful risk management is based on identifying and attacking risks early in the project lifecycle. We've discussed some overall risk management strategies, and looked at several practical risks to J2EE projects, along with strategies to manage them.
Definitions
Let's briefly define some of the concepts we'll discuss in this chapter:
- Unit tests: These test a single unit of functionality. In Java, this is often a single class. Unit tests are the finest level of granularity in testing, and should test that each method in a class satisfies its documented contract.
- Test coverage: This refers to the proportion of application code that is tested (usually, by unit tests). For example, we might aim to check that every line of code is executed by at least one test, or that every logical branch in the code is tested.
- Black-box testing: This considers only the public interfaces of classes under test. It is not based on knowledge of implementation details.
- White-box testing: Testing that is aware of the internals of classes under test. In a Java context, white-box testing considers private and protected data and methods. It doesn't merely test whether the class does what is required of it; it also tests how it does it. I don't advocate white-box testing (more on this later). White-box testing is sometimes called "glass-box testing".
- Regression tests: These establish that, following changes or additions, code still does what it did before. Given adequate coverage, unit tests can serve as regression tests.
- Boundary-value tests: These test unusual or extreme situations that code under test should be able to handle (for example, unexpected null arguments to a method).
- Acceptance tests (sometimes called functional tests): These are tests from a customer's viewpoint. An acceptance test is concerned with how the application meets business requirements. While unit tests test how each part of an application does its job, acceptance tests ignore the implementation details and test the ultimate functionality, using concepts that make sense to a user (or customer, in XP terminology).
- Load tests: These test an application's behavior as load increases (for example, to simulate a greater population of users). The aim of load testing is to prove that the application can cope with the load it is expected to encounter in production, and to establish the maximum load it can support. Load tests will often be run over long periods of time, to test stability. Load testing may uncover concurrency issues. Throughput targets are an important part of an application's non-functional requirements and should be defined as part of business requirements.
- Stress tests: These go beyond load testing to increase load on the application beyond the projected limits. The aim is not to simulate expected load, but to cause the application to fail or exhibit unacceptable response times, thus demonstrating its weak links from the point of view of throughput and stability. This can suggest improvements in design or code, and establish whether overloading the application can lead to erroneous behavior such as loss of data or crashing.
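The unit-test definition above can be made concrete with a minimal sketch. `Calculator` and its `add` method are hypothetical stand-ins for a class under test; a real project would use a framework such as JUnit rather than a hand-rolled `main` method, but the principle of one test per documented behavior is the same.

```java
// A minimal, framework-free unit-test sketch. Calculator is a hypothetical
// class under test; in practice a framework such as JUnit would run the tests.
public class CalculatorTest {

    // Hypothetical class under test.
    static class Calculator {
        int add(int a, int b) {
            return a + b;
        }
    }

    // One test method per documented behavior of the method under test.
    static void testAddSumsItsArguments() {
        Calculator calc = new Calculator();
        if (calc.add(2, 3) != 5) {
            throw new AssertionError("add(2, 3) should return 5");
        }
    }

    public static void main(String[] args) {
        testAddSumsItsArguments();
        System.out.println("All tests passed");
    }
}
```

Each test exercises a single unit (here, one method of one class) against its documented contract, which is exactly the granularity the definition describes.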
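A boundary-value test, as defined above, probes the extreme inputs a method claims to handle. The sketch below uses a hypothetical `parsePositive` method and checks that a null argument produces the documented exception rather than an obscure failure; both the method and its contract are assumptions for illustration.

```java
// Boundary-value test sketch: verifies that a hypothetical method rejects
// a null argument with the exception its contract documents.
public class BoundaryValueTestSketch {

    // Hypothetical method under test: parses a positive integer,
    // documented to throw IllegalArgumentException on null or non-positive input.
    static int parsePositive(String s) {
        if (s == null) {
            throw new IllegalArgumentException("input must not be null");
        }
        int value = Integer.parseInt(s);
        if (value <= 0) {
            throw new IllegalArgumentException("input must be positive");
        }
        return value;
    }

    public static void main(String[] args) {
        boolean rejectedNull;
        try {
            parsePositive(null);
            rejectedNull = false;  // reaching here means the boundary case was mishandled
        } catch (IllegalArgumentException expected) {
            rejectedNull = true;   // the documented behavior for the null boundary
        }
        System.out.println(rejectedNull
                ? "null argument handled as documented"
                : "FAILED: null argument not rejected");
    }
}
```

The value of such tests lies in pinning down behavior at the edges (null, zero, empty, maximum) where ordinary "happy path" unit tests rarely look.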
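The load-test definition can also be sketched in code. The example below drives a trivial in-process operation from several concurrent threads and reports throughput; the operation, thread count, and call count are all illustrative assumptions. Real load tests would drive the deployed application over the network, typically with a dedicated tool such as Apache JMeter.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// A minimal load-test sketch: many threads repeatedly invoke an operation,
// and the harness counts completed calls to derive throughput.
public class LoadTestSketch {

    // Hypothetical operation under load; stands in for a request to the application.
    static void operationUnderTest() {
        Math.sqrt(System.nanoTime());  // simulate a small amount of work
    }

    // Runs callsPerThread invocations on each of the given number of threads
    // and returns the total number of completed calls.
    public static int run(int threads, int callsPerThread) {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        AtomicInteger completed = new AtomicInteger();
        for (int t = 0; t < threads; t++) {
            pool.execute(() -> {
                for (int i = 0; i < callsPerThread; i++) {
                    operationUnderTest();
                    completed.incrementAndGet();
                }
            });
        }
        pool.shutdown();
        try {
            pool.awaitTermination(1, TimeUnit.MINUTES);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return completed.get();
    }

    public static void main(String[] args) {
        long start = System.currentTimeMillis();
        int calls = run(8, 1000);
        long elapsed = System.currentTimeMillis() - start;
        System.out.println(calls + " calls completed in " + elapsed + " ms");
    }
}
```

Raising the thread count beyond the expected production load turns the same harness into a crude stress test: the point at which throughput collapses or errors appear marks the application's weak links.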