Yesterday we tried to update our Atlassian Crowd server to 2.1.0. Crowd is an SSO and identity management solution from Atlassian. Updating Crowd is normally a breeze, but not this time. At my current client we run Crowd on JBoss 4.2.0.GA with a Sun JDK 1.5.0_22. We checked whether Crowd 2.1.0 is compatible with JDK 1.5, and according to the documentation it should run fine.
We deployed the new Crowd war file to JBoss, started the server and tailed the log file. Everything looked OK and Crowd deployed successfully, but when we tried logging in to Crowd the server.log displayed the following exception:
Servlet.service() for servlet default threw exception
java.lang.NoSuchMethodError: java.lang.String.isEmpty()Z
    at com.atlassian.crowd.integration.http.util.CrowdHttpTokenHelperImpl.buildCookie(CrowdHttpTokenHelperImpl.java:194)
    at com.atlassian.crowd.integration.http.util.CrowdHttpTokenHelperImpl.setCrowdToken(CrowdHttpTokenHelperImpl.java:149)
    at com.atlassian.crowd.integration.http.HttpAuthenticatorImpl.setPrincipalToken(HttpAuthenticatorImpl.java:113)
This exception indicates that CrowdHttpTokenHelperImpl.buildCookie calls the isEmpty method of java.lang.String, but that method does not exist on our JVM. And that is correct, because the isEmpty method was only introduced in JDK 1.6.
I decompiled the crowd-integration-client-common-2.1.0.jar, which contains the code throwing the exception: CrowdHttpTokenHelperImpl.buildCookie. Line 194 contains the following:
if ((domain != null) && (!domain.isEmpty()) && (!"localhost".equals(domain)))
This is indeed JDK 1.6 code that is being called. I filed a bug report at Atlassian: CWD-2143.
The problem only occurs when Crowd is configured to use SSO. The piece of code reads the SSO cookie and does some validation.
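For illustration, the same check can be written against the JDK 1.5 String API: `domain.length() > 0` behaves exactly like the 1.6-only `domain.isEmpty()` negated. The class and method names below are my own invention to make the sketch self-contained, not Atlassian's actual fix:

```java
// Hypothetical sketch: a JDK 1.5-compatible version of the check on line 194.
// String.isEmpty() only exists from JDK 1.6 on; length() == 0 is the classic
// pre-1.6 way to test for an empty string.
public class CookieDomainCheck {

    // Mirrors: (domain != null) && (!domain.isEmpty()) && (!"localhost".equals(domain))
    static boolean shouldSetCookieDomain(String domain) {
        return domain != null
                && domain.length() > 0
                && !"localhost".equals(domain);
    }

    public static void main(String[] args) {
        System.out.println(shouldSetCookieDomain("example.com")); // prints "true"
        System.out.println(shouldSetCookieDomain(""));            // prints "false"
        System.out.println(shouldSetCookieDomain("localhost"));   // prints "false"
    }
}
```

This compiles on JDK 1.5, which is presumably all the fix for CWD-2143 needs to do.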
So be warned if you want to update your Crowd instance that runs on JDK 1.5!
Firewalls protect backend resources, such as databases, in multi-machine systems. You can also use firewalls to protect Application Servers and Web servers from unauthorized outside access. A demilitarized zone (DMZ) configuration involves multiple firewalls that add layers of security between the Internet and critical data and business logic.
A wide variety of topologies are appropriate for a DMZ environment; the basic locations of elements in a simple DMZ topology follow.
The main purpose of a DMZ configuration is to protect the business logic and data in the environment from unauthorized access. A typical DMZ configuration includes:
- An outer firewall between the public Internet and the Web server or servers processing the requests originating on the company Web site.
- An inner firewall between the Web server and the Application Servers to which it is forwarding requests. Company data also resides behind the inner firewall.
The area between the two firewalls gives the DMZ configuration its name. Additional firewalls can further safeguard access to databases holding administrative and application data.
Avoids critical business data in the DMZ. A DMZ configuration protects application logic and data by creating a buffer between the public Internet Web site and the internal intranet, where Application Servers and the data tier reside. Desirable DMZ topologies do not place databases or application servers holding critical business data in the DMZ.
Supports Network Address Translation (NAT). A firewall product that runs NAT receives packets for one IP address, and translates the headers of the packet to send the packet to a second IP address. In environments with firewalls employing NAT, avoid configurations involving complex protocols in which IP addresses are embedded in the body of the IP packet, such as Java Remote Method Invocation (RMI) or Internet Inter-Orb Protocol (IIOP). These IP addresses are not translated, making the packet useless.
Avoids the DMZ protocol switch. The Web server sends HTTP requests to Application Servers behind firewalls. It is simplest to open an HTTP port in the firewall to let the requests through. Configurations that require switching to another protocol, such as IIOP, and opening firewall ports corresponding to the protocol, are less desirable. They are often more complex to set up, and the protocol switching overhead can impact performance.
Allows an encrypted link between Web server and Application Server. Configurations that support encryption of communication between the Web server and application server reduce the risk that attackers are able to obtain secure information by sniffing packets sent between the Web server and Application Server. A performance penalty usually accompanies such encryption.
Avoids a single point of failure. A point of failure exists when one process or machine depends on another process or machine. A single point of failure is especially undesirable because if the point fails, the whole system becomes unavailable. When comparing DMZ solutions, a single point of failure refers to a single point of failure between the Web server and Application Server. Various failover configurations can minimize downtime and possibly even prevent a failure. However, these configurations usually require additional hardware and administrative resources.
Minimizes the number of firewall holes. Configurations that minimize the number of firewall ports are desirable because each additional firewall port leaves the firewall more vulnerable to attackers.
Reverse proxy (IP forwarding)
Reverse proxy, or IP-forwarding, topologies use a reverse proxy server to receive incoming HTTP requests and forward them to a Web server. The Web server forwards the requests to the Application Servers for actual processing. The reverse proxy returns completed requests to the client, hiding the originating Web server.
The following figure shows a simple reverse proxy topology.
In this example, a reverse proxy resides in a demilitarized zone (DMZ) between the outer and inner firewalls. It listens on an HTTP port, typically port 80, for HTTP requests. The reverse proxy then forwards such requests to an HTTP server that resides on the same machine as WebSphere Application Server. After the requests are fulfilled, they are returned through the reverse proxy to the client, hiding the originating Web server.
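The forwarding step described above can be sketched in a few lines of Java using the JDK's built-in HTTP server (com.sun.net.httpserver, available since JDK 6). This is an illustration of the idea, not a production proxy: it forwards requests over plain HTTP to a backend base URL (a hypothetical origin Web server) and copies the backend's response back to the client, so the client never sees the origin server's address.

```java
// Minimal sketch of the reverse proxy forwarding idea. In a real DMZ the
// proxy and the origin Web server sit on different machines on opposite
// sides of the inner firewall; the listen port and backend URL below are
// illustrative parameters, not values from any particular product.
import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpHandler;
import com.sun.net.httpserver.HttpServer;
import java.io.ByteArrayOutputStream;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;

public class MiniReverseProxy {

    // Starts a proxy on listenPort that forwards every request to backendBase,
    // e.g. backendBase = "http://localhost:8081". Port 0 picks a free port.
    public static HttpServer start(int listenPort, final String backendBase) throws Exception {
        HttpServer proxy = HttpServer.create(new InetSocketAddress(listenPort), 0);
        proxy.createContext("/", new HttpHandler() {
            public void handle(HttpExchange exchange) throws java.io.IOException {
                // Forward the request path to the backend over plain HTTP.
                URL url = new URL(backendBase + exchange.getRequestURI());
                HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                conn.setRequestMethod(exchange.getRequestMethod());
                int status = conn.getResponseCode();
                byte[] body = readAll(conn.getInputStream());
                conn.disconnect();
                // Return the completed response to the client; the origin stays hidden.
                exchange.sendResponseHeaders(status, body.length);
                OutputStream out = exchange.getResponseBody();
                out.write(body);
                out.close();
            }
        });
        proxy.start();
        return proxy;
    }

    private static byte[] readAll(InputStream in) throws java.io.IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        byte[] chunk = new byte[4096];
        int n;
        while ((n = in.read(chunk)) != -1) {
            buf.write(chunk, 0, n);
        }
        in.close();
        return buf.toByteArray();
    }
}
```

Note that only one protocol (HTTP) and one port cross the inner firewall here, which is exactly the "avoids the DMZ protocol switch" and "minimizes the number of firewall holes" properties discussed above.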
Reverse proxy servers are typically used in DMZ configurations to provide additional security between the public Internet and the Web servers (and application servers) servicing requests.
Reverse proxy configurations support high performance DMZ solutions that require as few open ports in the firewall as possible. The reverse proxy capabilities of the Web server inside the DMZ require as few as one open port in the second firewall, potentially two if using Secure Sockets Layer (SSL) – port 443.
Advantages of using a reverse proxy server in a DMZ configuration include:
- The reverse proxy server does not need database access through the firewall.
- The reverse proxy configuration supports WebSphere Application Server security and NAT firewalls.
- The basic reverse proxy configuration is well known and tested in the industry, resulting in less customer confusion than other DMZ configurations.
- The reverse proxy configuration is reliable and its performance is relatively fast.
- The reverse proxy configuration eliminates protocol switching, by using the HTTP protocol for all forwarded requests.
- The reverse proxy server uses only one HTTP firewall port for requests and responses.
Disadvantages of using a reverse proxy server in a DMZ configuration include the following:
- The presence of a reverse proxy server in a DMZ is not suitable for some environments, for example where security policies prohibit using the same port or protocol for inbound and outbound traffic across a firewall.
- The reverse proxy configuration requires more hardware and software than similar topologies that do not include a reverse proxy server, which makes it more complicated to configure and maintain.