
Sunday, March 27, 2011

Wrapping ReCaptcha in a Custom JSF Component (With Facelets Support)

A couple of days ago I decided to protect a form in a web application of ours with a CAPTCHA and, after spending some hours evaluating the available options, I chose Google's reCaptcha.

Since the form I had to protect is presented by a Java EE 6 Web Module using Facelets, I decided that the best course of action was:
  • Using a reCaptcha Java library, as suggested by the reCaptcha developer's documentation.
  • Wrapping the library functionality into a custom JSF component, so that it could be easily reused in any Java EE web application.
  • Writing a Facelets tag library descriptor so that I could use my component from both JSP and Facelets pages.
Please note that, for the sake of clarity, some error management and logging code has been omitted from the examples below.

    reCaptcha Java Library

    The contributed reCaptcha Java library described in the reCaptcha developer's documentation is really easy to use, but it does not offer tools adequate to a Java EE developer's needs. In short, it requires you to add Java scriptlets to a JSP page, which is ugly to look at and awkward to maintain.
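    With the scriptlet-based approach suggested by the library documentation, for instance, your JSP would contain something roughly like this (a sketch, with placeholder keys):

    <%
      ReCaptcha c = ReCaptchaFactory.newReCaptcha("your_public_key", "your_private_key", false);
      out.print(c.createRecaptchaHtml(null, null));
    %>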

    With this code you can create your reCaptcha instance:

    ReCaptcha c = ReCaptchaFactory.newReCaptcha("your_public_key", "your_private_key", false);

    and with the createRecaptchaHtml method you can have it create the required HTML to show the reCaptcha widget:

    c.createRecaptchaHtml(null, null);

    Once the client submits the form where the widget is shown, the following piece of code can be used to determine whether the value entered by the user is correct:

    String remoteAddr = request.getRemoteAddr();
    ReCaptchaImpl reCaptcha = new ReCaptchaImpl();
    reCaptcha.setPrivateKey("your_private_key");

    String challenge = request.getParameter("recaptcha_challenge_field");
    String uresponse = request.getParameter("recaptcha_response_field");
    ReCaptchaResponse reCaptchaResponse = reCaptcha.checkAnswer(remoteAddr, challenge, uresponse);

    reCaptchaResponse.isValid();

    No doubt a custom tag would be much easier to use.

    A reCaptcha Custom Tag

    We would like to be able to write something like this in our pages, instead:

    <rc:recaptcha ... />

    The only required parameters the library needs are the two reCaptcha keys, so our custom tag would end up looking like this:

    <rc:recaptcha id="..." publicKey="..." privateKey="..." />

    The basic files you need to build a custom JSF component are:
    • The component implementation file.
    • The component's renderer.
    • The component's tag handler.

    The Component

    Since users will input data through the component, our component class will extend the JSF UIInput class. Although we won't ever read what the user inputs, we use this class so that we can take advantage of the JSF infrastructure during the validation phase.

    In our component skeleton, we create the setter methods so that the reCaptcha keys can be configured:

    import javax.el.ValueExpression;
    import javax.faces.component.UIInput;

    public class RecaptchaComponent extends UIInput {

      static final String RecaptchaComponent_FAMILY = "RecaptchaComponentFamily";
      private String publicKey;
      private String privateKey;

      @Override
      public final String getFamily() {
        return RecaptchaComponent_FAMILY;
      }

      public void setPublicKey(String s) {
        publicKey = s;
      }

      public void setPrivateKey(String s) {
        privateKey = s;
      }

      public String getPublicKey() {
        if (publicKey != null)
          return publicKey;
           
        ValueExpression ve = this.getValueExpression("publicKey");
        if (ve != null) {
          return (String)ve.getValue(getFacesContext().getELContext());
        } else {
          return publicKey;
        }
      }

      public String getPrivateKey() {
        if (privateKey != null)
          return privateKey;

        ValueExpression ve = this.getValueExpression("privateKey");
        if (ve != null) {
          return (String)ve.getValue(getFacesContext().getELContext());
        } else {
          return privateKey;
        }
      }
    }

    As you may notice, since we want to accept EL expressions to set the reCaptcha keys, the corresponding getters take that into account and retrieve the value from a value expression if the user didn't use a literal.

    The Renderer

    The component renderer is pretty simple, since the HTML will be produced by the reCaptcha Java library:

    public class RecaptchaComponentRenderer extends Renderer {

      static final String RENDERERTYPE = "RecaptchaComponentRenderer";

      @Override
      public void decode(FacesContext context,
        UIComponent component) {
        if (component instanceof UIInput) {
          UIInput input = (UIInput) component;
          String clientId = input.getClientId(context);

          Map<String, String> requestMap =
            context.getExternalContext().getRequestParameterMap();
          String newValue = requestMap.get(clientId);
          if (newValue != null) {
            input.setSubmittedValue(newValue);
          }
        }
      }

      @Override
      public void encodeBegin(FacesContext ctx,
        UIComponent component) throws IOException {
      }

      @Override
      public void encodeEnd(FacesContext ctx,
        UIComponent component)
        throws IOException {
       
        if (component instanceof RecaptchaComponent) {       
          RecaptchaComponent rc = (RecaptchaComponent) component;
          String publicKey = rc.getPublicKey();
          String privateKey = rc.getPrivateKey();
          if (publicKey == null || privateKey == null) {
            throw new IllegalArgumentException("reCaptcha keys cannot be null. This is probably a component bug.");
          }

          ReCaptcha c = ReCaptchaFactory.newReCaptcha(publicKey, privateKey, false);
          String createRecaptchaHtml = c.createRecaptchaHtml(null, null);
          ResponseWriter writer = ctx.getResponseWriter();
          writer.write(createRecaptchaHtml);
        }
      }
    }

    The Component Tag Handler

    You can think of the tag handler as the intermediary between the custom tag you use in the page and the component class. It is basically responsible for getting and setting the component's properties.

    In this case, we need the two setters for the reCaptcha keys, and they must accept ValueExpression objects. When the tag handler sets the component's properties, it will check whether the values are literals or value expressions and act accordingly:
    • If they're literals, it will set them into the component's fields.
    • If they're value expressions, it will store them into the map of value expressions of the component for later evaluation (as seen in the component's code).

    The code looks like this:

    public class RecaptchaComponentTag extends UIComponentELTag {
      private ValueExpression publicKey;
      private ValueExpression privateKey;

      public void setPublicKey(ValueExpression s) {
        publicKey = s;
      }

      public void setPrivateKey(ValueExpression s) {
        privateKey = s;
      }

      @Override
      public String getComponentType() {
        return RecaptchaComponent.RecaptchaComponent_FAMILY;
      }

      @Override
      public String getRendererType() {
        return RecaptchaComponentRenderer.RENDERERTYPE;
      }

      @Override
      protected void setProperties(UIComponent component) {
        super.setProperties(component);
     
        if (component instanceof RecaptchaComponent) {
          RecaptchaComponent c = (RecaptchaComponent) component;

          if (publicKey != null) {
            if (publicKey.isLiteralText()) {
              c.setPublicKey(publicKey.getExpressionString());
            } else {
              c.setValueExpression("publicKey", publicKey);
            }
          }

          if (privateKey != null) {
            if (privateKey.isLiteralText()) {                
              c.setPrivateKey(privateKey.getExpressionString());
            } else {
              c.setValueExpression("privateKey", privateKey);
            }
          }
        }
      }
    }

    The Validator

    When the user submits the form, we need to check whether the value they entered was correct. JSF input components undergo a validation phase during their life cycle, so we can perform the reCaptcha check during this phase.

    To do that, we implement the Validator interface and write its validate method accordingly:

    public class RecaptchaValidator implements Validator {

      public void validate(FacesContext context, UIComponent component, Object value) throws ValidatorException {
        HttpServletRequest request = (HttpServletRequest) context.getExternalContext().getRequest();

        if (component instanceof RecaptchaComponent) {
          RecaptchaComponent c = (RecaptchaComponent)component;
          String remoteAddr = request.getRemoteAddr();
          ReCaptchaImpl reCaptcha = new ReCaptchaImpl();
          reCaptcha.setPrivateKey(c.getPrivateKey());

          String challenge =
            request.getParameter("recaptcha_challenge_field");
          String uresponse =
            request.getParameter("recaptcha_response_field");
          ReCaptchaResponse reCaptchaResponse =
            reCaptcha.checkAnswer(remoteAddr, challenge, uresponse);

          if (!reCaptchaResponse.isValid()) {
            throw new ValidatorException(
              new FacesMessage(FacesMessage.SEVERITY_ERROR, "Invalid captcha", "Invalid captcha"));
          }
        }
      }
    }

    As you can see, we just use the reCaptcha Java library to perform the check.

    Using the Validator in the Component

    To use the validator as a default validator, we can simply add it to the component's list of validators in its constructor:

    public RecaptchaComponent() {
        super();
        addValidator(new RecaptchaValidator());
    }

    We also need to override the UIInput validate method so that our validator is called:

    @Override
    public void validate(FacesContext ctx) {
      Validator[] validators = getValidators();
      for (Validator v : validators) {
        try {
          v.validate(ctx, this, null);
        } catch (ValidatorException ex) {
          setValid(false);
          FacesMessage message = ex.getFacesMessage();
          if (message != null) {
            message.setSeverity(FacesMessage.SEVERITY_ERROR);
            ctx.addMessage(getClientId(ctx), message);
          }
        }
      }

      super.validate(ctx);
    }

    The JSF Configuration File

    We now must tell JSF about the new component. Since we distribute it in a standalone JAR, we add a faces-config.xml file to the META-INF directory of the archive with the following content:

    <?xml version='1.0' encoding='UTF-8'?>
    <faces-config version="2.0"
      xmlns="http://java.sun.com/xml/ns/javaee"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-facesconfig_2_0.xsd">
      <component>
        <component-type>RecaptchaComponentFamily</component-type>
        <component-class>es.trafico.jsf.component.RecaptchaComponent</component-class>
      </component>

      <render-kit>
        <renderer>
          <description>Renderer.</description>
          <component-family>RecaptchaComponentFamily</component-family>
          <renderer-type>RecaptchaComponentRenderer</renderer-type>
          <renderer-class>
            es.trafico.jsf.component.RecaptchaComponentRenderer
          </renderer-class>
        </renderer>
      </render-kit>

      <validator>
        <validator-id>recaptchaValidator</validator-id>
        <validator-class>
          es.trafico.jsf.component.RecaptchaValidator
        </validator-class>
      </validator>
      
    </faces-config>

    The Tag Library Descriptor

    To be able to use our new custom component in our JSP pages, we need a tag library descriptor. Since we're packaging our custom component in a standalone JAR file so that it can be used in all our applications, the tag library descriptor must be placed into the META-INF directory of the archive.

    Our taglib.tld file is like this:

    <?xml version="1.0" encoding="UTF-8"?>
    <taglib version="2.1" xmlns="http://java.sun.com/xml/ns/javaee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-jsptaglibrary_2_1.xsd">
      <tlib-version>1.0</tlib-version>
      <short-name>ReCaptcha-Library</short-name>
      <uri>http://www.reacciona.es/rc</uri>

      <tag>
        <name>recaptcha</name>
        <tag-class>
          es.trafico.jsf.component.RecaptchaComponentTag
        </tag-class>
        <body-content>empty</body-content>
        <attribute>
          <name>id</name>
          <required>false</required>
          <rtexprvalue>true</rtexprvalue>
        </attribute>
        <attribute>
          <name>publicKey</name>
          <required>true</required>
          <deferred-value>
            <type>java.lang.String</type>
          </deferred-value>
        </attribute>
        <attribute>
          <name>privateKey</name>
          <required>true</required>
          <deferred-value>
            <type>java.lang.String</type>
          </deferred-value>
        </attribute>
      </tag>
    </taglib>

    Using The Component

    You can now use your new tag library in your JSP pages by adding this directive at the beginning of the page:

    <%@taglib prefix="rc" uri="http://www.reacciona.es/rc"%>

    You can now add your custom component instances into your JSP pages (with JSF) this way:

    <f:view>
    ...
      <h:form>
        <rc:recaptcha id="rc"
          publicKey="#{yourBean.publicKey}"
          privateKey="#{yourBean.privateKey}"/>
        <h:message for="rc" />
      </h:form>
    </f:view>

    In this example, we use value expressions to set both reCaptcha keys from two properties of one of your managed beans. This way, you won't hardcode any parameter into your pages.

    Using the Custom Component With Facelets

    You won't be able to use the custom component in a Facelets page until you write a Facelets tag library descriptor. You might be wondering why there are so many indirection layers: Facelets is designed to be more flexible than JSP and its tag descriptors reflect this design decision. Instead of having to describe your component down to the smallest detail, Facelets lets you just declare its existence and, for example, will set attributes at runtime by inspecting the component and tag classes.

    A basic Facelets tag library descriptor for the component is the following:

    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE facelet-taglib PUBLIC
      "-//Sun Microsystems, Inc.//DTD Facelet Taglib 1.0//EN"
      "http://java.sun.com/dtd/facelet-taglib_1_0.dtd">

    <facelet-taglib>
      <namespace>http://www.reacciona.es/rc</namespace>
      <tag>
        <tag-name>recaptcha</tag-name>
        <component>
          <component-type>RecaptchaComponentFamily</component-type>
          <renderer-type>RecaptchaComponentRenderer</renderer-type>
        </component>
      </tag>
    </facelet-taglib>

    Please note that both the component and the renderer type are just the identifiers we declared in the JSF configuration file.

    We can now use the custom component in a Facelets page by adding the following namespace to the html tag at the beginning of the page:

    xmlns:rc="http://www.reacciona.es/rc"

    The syntax to add the component is the same syntax that we used in the JSP example.
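    For reference, a minimal Facelets page using the component might look like this (a sketch; the managed bean and its properties are placeholders you would adapt to your application):

    <html xmlns="http://www.w3.org/1999/xhtml"
          xmlns:h="http://java.sun.com/jsf/html"
          xmlns:rc="http://www.reacciona.es/rc">
      <body>
        <h:form>
          <rc:recaptcha id="rc"
            publicKey="#{yourBean.publicKey}"
            privateKey="#{yourBean.privateKey}"/>
          <h:message for="rc"/>
          <h:commandButton value="Submit" action="#{yourBean.submit}"/>
        </h:form>
      </body>
    </html>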

    Tuesday, March 1, 2011

    Upgrading Oracle Glassfish Server to v. 3.1

    Oracle Corporation just released Oracle Glassfish Server v. 3.1: this release includes many interesting features and I started upgrading as soon as I could allocate some time. The official Glassfish Server 3.1 Upgrade Guide can be found here.

    In a very recent post, I described how you can upgrade your Glassfish instance to v. 3.0.1 using the tools provided by the application server. With the release of version 3.1 things haven't changed much, so if you're in a hurry you can just follow those instructions.

    Nevertheless, I'll take advantage of this news to summarize the possible upgrade paths for your Glassfish instances so that you can choose the one that best fits your needs:
    • Upgrade tool.
    • Update tool.
    • pkg tool.
    • Software Update Notifier.

    Upgrade Tool

    The upgrade tool lets you perform side-by-side upgrades and it's the only tool that can be used to upgrade a Glassfish instance earlier than Glassfish v. 3.0.1 or Enterprise Server v. 3.0.

    The ability to perform side-by-side upgrades is important when you do not want to modify your existing installation. If something goes wrong during the upgrade procedure, the original instance is untouched and the administrator is free to retry the upgrade or restart the old instance until further investigation is done.

    The upgrade tool, installed at $GF_INST/glassfish/bin/asupgrade, can be used in either graphical or command-line mode. To use it, both the source and the target server directories must be accessible from the system on which asupgrade is launched, and the user must have read permission on the source directory and read and write permission on the target directory.

    Before launching the tool, the Glassfish domain to be upgraded must be stopped. The official documentation also suggests manually copying the libraries in the server's /lib directory from the source to the target server. As pointed out in Bobby's comment, older versions of this tool (such as v. 3.0.1) even tried to detect and copy user libraries from $GF_INST/glassfish/lib: the upgrade tool bundled with GlassFish 3.1 will not, so it's important to check this directory and manually migrate libraries from one server installation to the other.
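    The domain can be stopped with the usual asadmin command:

    asadmin stop-domain your-domain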

    The options accepted by the tool are the following:
    • -c|--console: Launches the tool in command-line mode.
    • -V|--version: Dumps the tool's version number.
    • -h|--help: Dumps the tool's arguments' help.
    • -s|--source: Specifies the source domain directory.
    • -t|--target: Specifies the target domain root-directory.
    • -f|--passwordfile: Specifies a password file containing the Glassfish master password.

    The quickest way to launch the tool is just:

    asupgrade

    to launch it in GUI mode, or

    asupgrade -c

    if you prefer the CLI mode. In either mode, just supply the required parameters and proceed with the upgrade.

    If you need to automate the upgrade procedures of multiple domains, perhaps you'd rather use the asupgrade tool in a script:

    asupgrade -c \
      -s /path/to/source/domain \
      -t /path/to/target/root/domain

    At the end of the upgrade procedure, you should check the logs to verify that no error was encountered during the process. If everything is ok, you can now start your new Glassfish v. 3.1 domain.

    Update Tool and pkg Tool

    The update tool and the pkg tool are, respectively, a GUI and a CLI utility, both part of the Update Center Tools. Although very similar, they do not provide the same set of features to the user. Nevertheless, as far as the upgrade procedure is concerned, they can be used interchangeably and whether you use one or the other is just a matter of taste and opportunity.

    The biggest differences from the upgrade tool described in the previous section are that these tools cannot be used to upgrade instances earlier than 3.0.1 and that they modify the server installation directory during the upgrade procedure. Should the procedure fail, although both the tools and the upgrade procedure are very resilient, the administrator could be left with a non-functional Glassfish installation. The biggest advantage is that this is a very quick upgrade path.

    Before upgrading, as usual, Glassfish domains should be stopped.

    The update tool can be launched with the:

    $GF_INST/glassfish/bin/updatetool

    command and its GUI will appear. On the main window, click the Available Updates control, select all the items in the list and proceed with the upgrade procedure. The server will be upgraded to version 3.1 and, at the end of the procedure, the existing domains must be upgraded by starting them with the --upgrade option:

    asadmin start-domain your-domain --upgrade

    Despite the name of the command, at the end of the upgrade the domain will be stopped. After upgrading all of your domains, you can start them normally.

    Upgrading using the pkg command will be familiar to long-time (Open)Solaris users. Glassfish bundles a version of the pkg command to support IPS repositories. To upgrade Glassfish to the latest version, it is sufficient to launch the following command:

    $GF_INST/glassfish/bin/pkg image-update

    After the upgrade completes correctly, each Glassfish domain should also be upgraded as explained in the update tool case.

    Software Update Notifier

    The Software Update Notifier is yet another Update Center tool and, as such, it enables the administrator to perform upgrades under the same conditions explained in the previous section.

    The Software Update Notifier, if installed, periodically checks whether updates are available and, when they are, it prompts the administrator to install them with a notification balloon. When all domains are stopped, the upgrade process can be started by accepting the notification in the balloon.

    After the upgrade completes correctly, each Glassfish domain should also be upgraded as explained in the update tool case.

    Friday, February 25, 2011

    Do not return in a finally block: return will swallow your exceptions

    This is a blog post that should not have to exist. Unfortunately, this is a recurring mistake I keep seeing out there and I hope this post can help people avoid it.

    Yesterday I was busy troubleshooting a problem we were experiencing in a cluster of application servers. We narrowed the problem down to a module implementing the JAAS login protocol and, according to the failing module's documentation, we should have received a LoginException caused by an UnsupportedOperationException thrown down the call stack. Instead, the module was silently failing and returning a reference to a partially constructed JAAS Subject.

    We checked out the module's source code and it took us a few minutes to spot the problem. The module was correctly implementing its documented protocol and was throwing the UnsupportedOperationException as expected. The author, unfortunately, forgot one of the rules of the Java language and was returning from a finally block, effectively discarding the exception thrown just a couple of lines before.

    It's amazing how many times I see people returning from a finally block without understanding what that code actually does.

    Still not convinced?

    Run this:

    @Test
    public void hello() {
        try {
            throw new UnsupportedOperationException();
        } finally {
            return;
        }
    }

    No exception is thrown. Surprised? You shouldn't be. There are plenty of variations on this theme out there. If you're still surprised that the hello() method completes without throwing an UnsupportedOperationException, please read on.


    try/catch/finally blocks

    According to the Java Language Specification, this is a simplified view of what happens when a try block completes abruptly for some reason R (you can read the normative documentation here):
    • If the finally block completes normally, then the try statement completes abruptly for reason R.
    • If the finally block completes abruptly for reason S, then the try statement completes abruptly for reason S (and reason R is discarded).
    As the JLS shows, then, the "reason" a try statement completes with is always overridden by an abrupt completion of the finally block. In other words, the finally block "decides" what happens next.

    You've probably thought many times about what happens if an exception is thrown in a finally block, haven't you? Perhaps you even tested that case: the exception propagated up the stack will be the one thrown in the finally block. That's not surprising, after all, and that's why you should carefully check that the code in your finally blocks doesn't fail without control.

    So far, so good. But what happens, then, when a return statement is executed inside a finally block?


    The return statement

    What Java programmers often misunderstand is the very nature of a return statement. The Java Language Specification is clear about that:

    The return statement always completes abruptly.

    That's why exceptions thrown in try or catch blocks (the "reason" for their abrupt completion) are simply discarded and not rethrown when a finally block completes abruptly with a return statement: the return becomes the "resulting" reason for the abrupt completion of the try statement, as seen in the previous section.
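    The same rule applies to return values, too. In this sketch, the return in the finally block "wins" and the method returns 0, silently discarding the value returned by the try block:

    public static int answer() {
        try {
            return 42;
        } finally {
            return 0; // the finally block decides: the method returns 0
        }
    }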


    Lesson learned

    I personally don't see any good reason, unless you want any exception to be swallowed, to return in a finally block: it defies the very reason for try/catch blocks to exist.

    Do think twice (or more...) before using this construct: chances are you're doing yourself and others a favor if you avoid it.

    Sunday, February 20, 2011

    Your Glassfish instance doesn't start. Have you checked the OSGi cache?

    I've seen this many times: you just upgraded your NetBeans and the bundled Glassfish and it suddenly stops responding. Or maybe you upgraded a Glassfish instance and your domain fails to start, leaving you watching an endless stream of dots and no further activity:

    $ asadmin start-domain your-domain
    Waiting for DAS to start......................
    $ vmstat 1

     kthr      memory            page            disk          faults      cpu
     r b w   swap  free  re  mf pi po fr de sr s0 s1 -- --   in   sy   cs us sy id
     0 0 0 945484 1100116 222 58 947 7 7  0  0  0 12  0  0 1423 1516 1185  3  1 96
     0 0 0 838032 1383588 7  23  0  0  0  0  0  0  4  0  0  601  757 1085  0  1 99
     0 0 0 838032 1383588 4   4  0  0  0  0  0  0  1  0  0  598  782 1071  0  0 100
     0 0 0 838032 1383588 4   4  0  0  0  0  0  0  0  0  0  565 1079 1296  1  0 99

    If this is happening to you, chances are that the OSGi cache of your domain is poisoned and is preventing your server from working properly. In this case, try to clean it up with the following command ($GR is the Glassfish installation root):

    $ rm -r $GR/glassfish/domains/your-domain/osgi-cache

    and start the server again.



    Upgrading Oracle Glassfish Server to v. 3.0.1

    Since version 3, Oracle Glassfish Server, formerly known as Sun Glassfish Enterprise Server, bundles an IPS-based "update tool" that makes life easier for system administrators upgrading their Glassfish instances.

    The update tool, a standalone GUI application, has been integrated into the Glassfish Admin Console since version 3.0.1, and can now be easily used to check installed components or available package updates.



    The update tool can be found at $GLASSFISH_ROOT/updatetool/bin/updatetool or, starting with version 3.0.1, at $GLASSFISH_ROOT/glassfishv3/bin/updatetool.

    Sometimes, though, administrators prefer a safer upgrade path than patching an existing instance, so that they can quickly roll back to the previous instance if something goes wrong.

    As described in the official Glassfish documentation, administrators can now use the asupgrade tool to upgrade an existing Glassfish domain to a new installation path so that, if the upgraded instance does not work properly, they can fall back to the old one.

    Upgrading Glassfish using the asupgrade tool is straightforward:

    • Install the Glassfish instance you're going to upgrade to. If you're using the zip distribution, just unzip it.
    • Stop the domain to be upgraded.
    • Launch the asupgrade tool to upgrade a Glassfish domain:
    asupgrade [-c] \
      --source /old/domain/path \
      --target /new/domain/root/path

    The optional -c option instructs asupgrade to use the command-line upgrade utility instead of the default GUI. This is handy when you need to upgrade instances on remote machines and you cannot, or do not wish to, use a GUI.

    Please note that the --source argument points to a domain while the --target argument points to a domain root. You should, then, use something like this:

    asupgrade -c \
      --source /opt/ogs-3/glassfish/domains/domain1 \
      --target /opt/ogs-3.0.1/glassfish/domains

    Depending on your domain name, the tool may ask you whether you wish to rename an existing domain on the new server instance. That's typically the case when you're upgrading a domain called domain1 since the default Glassfish installation bundles a domain with such a name.

    The tool will start the upgrade process that will be logged in the upgrade.log file for later inspection. If the upgrade process finishes correctly, you can now start the upgraded instance and check that all of your applications work as expected.

    So far, I haven't experienced any major problem with the asupgrade tool and it's a very easy upgrade process that takes care of everything. Resources such as data sources, connection pools and JavaMail sessions were all replicated correctly on the destination server. The tool even took care of copying the database drivers from one instance to another.

    The only glitch I experienced is being unable to start the server without cleaning up the OSGi cache first but that's easily solved.

    Monday, November 8, 2010

    Glassfish 3.0.x Admin Console Not Starting: Is It Behind a Proxy?

    Today I was performing yet another Glassfish v. 3.0.1 installation, one of the easiest pieces of software to set up out there, on a Solaris 10 system:
    • Install the software (unzipping a zip or installing a native package).
    • Optionally create a new domain.
    • Start the domain.

    To my surprise, when I tried to connect to the Admin Console, the browser got stuck just after I entered my user credentials. I tried to restart the domain over and over again but had no luck.

    The domain logs (which you can find in $YOUR_DOMAIN/logs/server.log) showed nothing interesting: the domain was starting up correctly and the admin console application was logging no errors. The last line of the logs was always:

    [#|[snip]|admin console: initSessionAttributes()|#]

    After a while, a line such as this appeared:

    [#|[snip]|Cannot refresh Catalog: Connection timed out: Connect|#]

    This was the clue! Glassfish is able to check for updates automatically from an IPS repository: do you remember that fancy update icon in the upper left corner of the Admin Console that shows up to suggest available updates? This server is behind a proxy: maybe I was experiencing a glitch related to this.

    A quick search indeed revealed that other users were experiencing the same problem when Glassfish sat behind a proxy server. The workarounds I verified are the following:
    • Have Glassfish use your proxy.
    • Use Glassfish updatetool to disable automatic updates (which is something I would always suggest for a production environment).
    • Remove the console-updatecenter-plugin.jar.

    The third suggestion comes from a thread published in the java.net Forums. Unfortunately the link is broken now and I could only read it using Google's cache.

    Have Glassfish Use Your Proxy

    To set up Glassfish to use a proxy you can use the following Java system properties:
    • http.proxyHost
    • http.proxyPort
    • http.proxyUser
    • http.proxyPassword

    You can either use the asadmin program or the admin console to do that. Using asadmin is as simple as:

    $ asadmin
    asadmin> create-jvm-options "-Dhttp.proxyXXX=value"
    asadmin> create-jvm-options ...
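    For example, assuming a hypothetical proxy listening on proxy.example.com:8080, the session could look like this:

    $ asadmin
    asadmin> create-jvm-options "-Dhttp.proxyHost=proxy.example.com"
    asadmin> create-jvm-options "-Dhttp.proxyPort=8080"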

    If you prefer using the Admin Console, just navigate to Enterprise Server/System Properties and use the web interface to add the values.

    The values you set will be reflected in the domain.xml domain configuration file:

    [...snip...]
    <java-config ...>
    [...snip...]
      <jvm-options>-Dhttp.XXX=value</jvm-options>
    </java-config>
    [...snip...]

    Use Glassfish updatetool To Disable Automatic Updates

    You can use the updatetool program to update Glassfish and configure the autoupdate feature:

    $ $GLASSFISH_INST/bin/updatetool

    The first time you launch updatetool, it will ask you to install this feature. Since you're behind a proxy, you need to set up some environment variables. If you're using an HTTP proxy, you can just set the http_proxy variable:

    $ export http_proxy=http://user:password@proxy.host:port

    When updatetool finishes installing the required packages, you can start it again and the updatetool window will show up. In the Preferences window you can tune the update behaviour or disable it altogether.

    Remove the console-updatecenter-plugin.jar

    The last thing you can do is remove the offending plugin. Glassfish plugins are deployed in the $GLASSFISH_INST/glassfish/modules directory. You can just rename console-updatecenter-plugin.jar to console-updatecenter-plugin.jar.old and Glassfish won't use it.
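    For example (adjust the path to your installation):

    $ cd $GLASSFISH_INST/glassfish/modules
    $ mv console-updatecenter-plugin.jar console-updatecenter-plugin.jar.old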


    "No Network Access" for the Admin Console


    As pointed out in a comment, Glassfish can be configured to have "no network access" by setting the

    com.sun.enterprise.tools.admingui.NO_NETWORK

    property to true. As usual, this can be done by adding the

    -Dcom.sun.enterprise.tools.admingui.NO_NETWORK=true

    to the JVM parameters list or using the asadmin tool:

    $ asadmin create-jvm-options \
      -Dcom.sun.enterprise.tools.admingui.NO_NETWORK=true

    Conclusion

    This is an annoying bug with an easy workaround. Looking forward to Glassfish v. 3.1.

    Sunday, October 3, 2010

    Some Reasons Why Solaris Is a Great Java Development Platform

    Some days ago I posted "The Death of OpenSolaris: Choosing an OS for a Java Developer", in which I stated that Solaris is a great platform for a Java developer. The point of that post was simply to wonder which Solaris version I'd use after the demise of OpenSolaris. What the post failed to clarify, as Neil's comment made me realize, were the reasons why you should choose Solaris as your development platform. I decided to write this follow-up to quickly summarize my favorite reasons and introduce some use cases where these technologies come in handy.

    Software Availability

    Although Solaris continues to be a niche OS (as many other platforms are, anyway), in the last few years Sun and the community did an excellent job of promoting it as a desktop alternative for developers. There even existed a specific distribution for developers: Solaris Express Developer Edition. It was discontinued, and there really is no need for it nowadays anyway. Late Solaris distributions (such as SXCE, OpenSolaris, OpenIndiana) include (either bundled or in the official package repository):
    • Data bases (MySQL, PostgreSQL).
    • Web Servers (Apache, Java Enterprise System Web Server, etc.).
    • Application servers (Glassfish).
    • The SAMP stack (Solaris + Apache + MySQL + PHP).
    • IDEs (NetBeans, Eclipse).
    • Support for other popular languages (Ruby, Groovy, etc.).
    • Identity management (LDAP, Java Enterprise System Identity Server).

    Solaris is also a platform of choice in the enterprise, so common enterprise software packages are supported and you, as a Java developer or Java architect, won't miss the pieces you need to build your development environment. The very basic software packages I often need as a Java developer are:
    • Oracle RDBMS.
    • Oracle WebLogic Application Server.
    • IBM WebSphere Application Server.
    • JBoss Application Server.

    Solaris' Technologies

    Solaris has got some unique technologies that other UNIX (and UNIX-like) systems that might be used as development platforms lack (or have ported from Solaris). What's important here is not "technologies on their own" or technologies that are helpful only in big enterprise environments, but the fact that:
    • They're pretty well integrated in Solaris and are built to take advantage of each other.
    • There are common use cases in which these technologies are really helpful to a developer.

    Each of them would deserve several posts of its own; nevertheless, I'll try to give some concise examples.

    Solaris Service Management Facility

    Although this technology is probably most useful to a system administrator, as a developer I have often taken advantage of it. SMF is a framework that provides a unified model for services and service management. The basic recipe only needs an XML descriptor for a service (a minimal sketch follows the list below). SMF lets you:
    • Define a service: startup scripts location, parameters and semantics.
    • Establish dependencies between services:
      • Services and service instances may depend on other service instances.
      • Service startup is performed in parallel, respecting service dependencies.
    • Enhanced security and fine-grained role based access control:
      • A service can be assigned only the minimum required set of privileges it needs to run.
      • Service management can be delegated to non-root users using Solaris RBAC (Role-Based Access Control).
    • Service health control:
      • Service auto-restarts.
      • Service health is enhanced by cooperation with Solaris Fault Manager which prevents service degradation when hardware failures occur.
    • Automatic inetd Services Wrapper: SMF automatically wraps inetd services.
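    As a rough sketch, a minimal service manifest might look like the following (the service name, script path and dependency below are made up for illustration; the element names come from the standard SMF service_bundle DTD):

    <?xml version="1.0"?>
    <!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1">
    <service_bundle type="manifest" name="myapp">
      <service name="application/myapp" type="service" version="1">
        <dependency name="network" grouping="require_all" restart_on="none" type="service">
          <service_fmri value="svc:/milestone/network:default"/>
        </dependency>
        <exec_method type="method" name="start" exec="/opt/myapp/bin/start.sh" timeout_seconds="60"/>
        <exec_method type="method" name="stop" exec=":kill" timeout_seconds="60"/>
        <instance name="default" enabled="false"/>
        <stability value="Unstable"/>
      </service>
    </service_bundle>

    Once imported with svccfg import, the service can be enabled, disabled and inspected with the usual svcadm and svcs commands.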

    A Typical Use Case

    Every software package I use has its own SMF descriptor (either provided with the package or defined by me) and it dramatically reduces the time I need to set up a development machine. In the case of WebSphere Application Server, for example, I have separate service instances for:
    • WebSphere IHS.
    • WebSphere Application Server.
    • WebSphere Application Server DMGR.
    • WebSphere Application Server cluster nodes.

    Dependencies are defined between them and I can start up the required WebSphere services with a single command:

    svcadm enable [websphere-service-name]

    and SMF will take care of everything.

    The usage pattern for SMF can be enhanced further. Let's suppose you're working on one or more projects, each of which requires a distinct set of running services. What usually happens is one of the following:
    • You install them all and let them run.
    • You install them all and start and stop them manually when you switch working project.

    Resources are always scarce for a developer, and some of us are paranoid about sparing them. With SMF you can:
    • Define an SMF service for each of your projects.
    • For every project, define dependencies on the services it needs.

    This way, at a minimum, you can start and shut down, with a single command, every service you need for a specific project. No more:
    • Custom shell scripts for every service.
    • Custom configuration entries for inetd services (such as Subversion, Apache, etc.).
    • Specific OS customization.
    • Running services you don't need and wasting resources you could use otherwise.


    ZFS

    The ZFS filesystem is unique as far as flexibility and ease of use are concerned. With an incredibly lean set of commands, you can:
    • Create file systems on the fly.
    • Snapshot file systems on the fly.
    • Clone file systems on the fly with almost no space overhead.

    There's a huge literature about ZFS, so I'll limit myself to describing my favorite use cases.

    Use Case: Multiplexing Your Development Environment

    Software installations are just the beginning of your user experience. Often, we spend time:
    • Configuring our environments.
    • Fine-tuning them.
    • Defining the set of additional libraries we need.
    • Defining the set of server resources (JDBC, JMS, etc.) our applications use.

    And so on. The list is endless.

    Sometimes it's necessary to prepare different environments for different projects or for different development stages of the same application. Instead of losing time and resources building separate environments from scratch, I usually proceed as follows (sketched in the commands below):
    • Install and configure my environment.
    • Make a ZFS snapshot of it.
    • Make a ZFS clone of it for every additional setup I need.
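    In ZFS terms, that boils down to a couple of commands such as these (the dataset names are illustrative):

    # snapshot the configured environment
    zfs snapshot rpool/export/devtools@baseline
    # clone it for an additional project or development stage
    zfs clone rpool/export/devtools@baseline rpool/export/devtools-projectx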

    Oracle JDeveloper is a good example of an application I often clone. JDeveloper is fundamentally a single-user environment, despite adopting the common approach of using a per-user configuration directory in the user's home directory. Instead of fiddling with scripts to set per-user configuration parameters, I just install it once, snapshot its installation directory and make a ZFS clone, one per environment. I use several clones of the JDeveloper environment myself, in my home directory.

    The Zones infrastructure can also use the power of ZFS clones, as we'll see in the following section, enhancing it further. Cloning a ZFS filesystem is also advantageous when dealing with big installations such as the disk images of your favorite virtualization technologies.


    Containers and Other Virtualization Technologies

    I consider Solaris a superior desktop virtualization platform. Once again, with a couple of commands, you can easily create a paravirtualized Solaris instance (a Zone). The Zones infrastructure is ZFS-aware and can take advantage of it.

    Zones can be configured with a command line interface to their XML configuration file. Creating a zone is straightforward and, since zones are a lightweight technology, you can create as many zones as you need. If you're using ZFS, the process of cloning a zone is incredibly simple and fast.

    Use Case: Clustering an Application Server

    During the development of your Java EE application you will typically need an instance of one (or more) of the following:
    • An application server.
    • A web server.
    • A database.
    • A user registry.

    It's also desirable to have them running in isolated environments so that you can mimic the expected production configuration. With zones it's easy: just create as many zones as you need and each one of them will behave as a separate Solaris instance. Every zone will have, for example:
    • Its own network card(s) and IP configuration.
    • Its own users, groups, roles and security policies.
    • Its own services.

    Instead of installing and configuring an environment multiple times, you can prepare "master" zones with the services you need. I've got a "master" zone for each of the following:
    • WebSphere Application Server.
    • WebLogic Application Server.
    • Oracle DB.
    • MySQL DB.
    • LDAP directory.

    and so forth. With one simple command (zoneadm -z new_zone clone [-m copy] [-s zfs_snapshot] source_zone) you'll end up with a brand new working environment in a matter of minutes.
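    A typical cloning session might look like this (zone names and paths are made up, and the source zone must be halted before cloning):

    # export the master zone's configuration and adapt it for the new zone
    zonecfg -z master-was export -f /tmp/was-dev.cfg
    vi /tmp/was-dev.cfg                  # change zonepath, network settings, etc.
    zonecfg -z was-dev -f /tmp/was-dev.cfg
    # clone the halted master zone and boot the new one
    zoneadm -z was-dev clone master-was
    zoneadm -z was-dev boot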

    Use Case: VirtualBox and ZFS

    Sometimes you'd rather work on a virtualized instance of some other OS, such as GNU/Linux, FreeBSD or Windows. Solaris is a great VirtualBox host and the power of ZFS will let you:
    • Create "master" images for every OS or every "OS role" you need.
    • Clone them on the fly to create a brand new virtual OS image.

    In my case, I've got:
    • A master Windows 7 client with Visual Studio for .NET development.
    • A master Windows Server 2008.
    • A master Windows Server 2008 (a clone of the previous one) with SQL Server 2008.
    • A master Debian GNU/Linux.

    Every time I need a new instance I just have to clone the disk image, and in a matter of seconds I've got the environment I need. Not only am I saving precious time, I'm also saving a vast amount of disk space. Had I to store all of the images (and zones) I use without ZFS, I'd need at least four times as many disks as I've got.

    Use Case: A Virtualized Network Stack

    Solaris provides pretty powerful network virtualization capabilities. You can, for example, create as many virtual NICs as you need and use them independently, either in Solaris Zones or as network cards for other virtualization technologies (such as VirtualBox). Network cards can be interconnected with virtual switches (etherstubs), which lets you create "networks in a box." Not only can you use virtualized instances to mimic your production environment: you can also create a virtualized network to emulate the complex network policies your environment might need.
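    As a rough sketch (the link names are illustrative), building such a "network in a box" with the Crossbow tooling looks like this:

    # create a virtual switch and two virtual NICs attached to it
    dladm create-etherstub stub0
    dladm create-vnic -l stub0 vnic0
    dladm create-vnic -l stub0 vnic1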

    If you need to test an environment whose configuration would be impossible to replicate without additional physical machines, that's where virtualization technologies (such as Zones or VirtualBox) and the virtualized network stack come in handy. My development environment for a project I'm working on is made up of:
    • Two zones with two load balanced IBM IHS instances.
    • A zone with an LDAP directory.
    • Two zones with two clustered instances of IBM WebSphere Application Server.
    • A Zone with an instance of IBM WebSphere DMGR.

    With Solaris, I can replicate the production environment on my box and respect each and every network configuration we use. Without these technologies, it would have been much harder to accomplish this goal, or I would have ended up with custom configurations (for example, to avoid port clashes). In either case, I'd lose much more time administering and configuring such environments if zones weren't so easy to use.

    DTrace

    DTrace's power is extremely easy to explain to a developer; at the same time, it's difficult to grasp its usefulness without trying it yourself. DTrace on Solaris provides tens of thousands of probes out of the box, and others can be created on the fly. These probes give you an extremely powerful means of troubleshooting problems both in your applications and in the underlying operating system. To use the probes, you write scripts in the D language. Fortunately, this language is pretty simple by design and you can write powerful D scripts in a few lines of code.
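    For instance, a one-liner such as this (run with the required privileges) counts the system calls issued by Java processes, without touching the application at all:

    dtrace -n 'syscall:::entry /execname == "java"/ { @[probefunc] = count(); }'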

    DTrace is unobtrusive and lets you troubleshoot problems immediately, without modifying your application, even in a production environment. Some IDEs, such as NetBeans, have powerful plugins that let you write D scripts and see the data collected by the probes in beautiful graphics.

    As a developer, I have valued DTrace's usefulness more than once. Instead of troubleshooting problems by digging into the source code and introducing additional code (even in the cases in which aspects come in handy), I could use a D script to observe the application from the outside and quickly collect data that helped me determine where the problem could be.

    In some cases, moreover, you may find yourself dealing with situations in which there's no source code available. I could quickly troubleshoot a problem I was having with WebSphere Application Server with a D script instead of relying on WebSphere's tracing facilities and the task of interpreting log files.

    Conclusion

    So much for an introductory post. The possibility of building a development environment as close as possible to your target environment is a "must" for any development platform. Additionally, I consider that working in an environment as close as possible to the production one not only gives you additional value and insights during the development stage, but should also be considered a mandatory requirement for every project we're involved in. Solaris provides all of the tools a developer needs to accomplish this goal.

    Solaris is a complex enterprise operating system with many features you'll probably never use. Nevertheless, there's a use case for many others, as I tried to point out in this post. Since some of these technologies were developed under an open source license, they are also available on other operating systems: ZFS is available on FreeBSD and there is a community effort to port it to OS X; DTrace is available on OS X, Linux and FreeBSD.

    The "Solaris advantage" is that all of these technologies are highly integrated and take advantage of each other. The result is worth more than the sum of them. These technologies have got a very polished and easy to use administrative interfaces: when time is important, "How you do it" is fundamental.

    I hope these insights help you understand if and when the Solaris operating system might be useful to you. Even if you conclude that it's not, I suggest you give it a try anyway: it's always good to add new technologies to your toolbox.

    Wednesday, September 29, 2010

    The Death of OpenSolaris: Choosing an OS for a Java Developer

    A Bit of History: The Struggles of OpenSolaris

    This is no news: you probably know all about it.

    As a long time Solaris user, the recent years have been full of good news for me.

    I remember starting with GNU/Linux at home to have "something similar" to the Solaris workstations I used at work. It was the time when software would most likely compile on Solaris rather than on Linux.

    Years later I bought my first Sun Workstation: it was the time when trying to compile, on Solaris, packages that would supposedly compile on any POSIX system was a pain. Still, I continued to regard Solaris as a stable platform and kept on using it for my work duties, such as Java programming.

    Then came Solaris 10 and all of its wonderful technologies such as ZFS, Zones and DTrace, just to cite a few. With it, there came the Solaris Express distributions which, at last, filled a long standing gap between Solaris and other operating systems, providing us a pretty modern desktop environment.

    In late 2008 came the first OpenSolaris distribution. I installed it, played with it, but kept on using SXCE for my workstations. The main reason was compatibility with many Sun packages, such as the Java Enterprise System or the Sun Ray Server Software, that had more than one glitch on OpenSolaris.

    When SXCE was discontinued, I waited for the 2010.xx OpenSolaris release to upgrade my systems. Unfortunately, that release will never see the light.

    The Oracle Leaked Memo (the dignifying uppercase is a must, given Oracle's prolonged silence on the subject) shed light on Oracle's plans for Solaris proper and OpenSolaris. Part of the "good news" is that the Solaris Express program has been resurrected and the first binary distribution is expected later this year.

    The bad news is that the code, or at least the parts of it that will be released with an open source license, will be published only after the releases of the full Solaris operating system. Basically, our privileged observation point over the development of the operating system has been shut down.

    Lots of ink has been spilled since the Leaked Memo and plenty of information, discussions and wars are going on in the blogosphere. I'm not an authoritative source on the subject and it's not even clear to me what I'm going to do now.

    Benefits of Solaris for a Java Developer

    Solaris has been my operating system of choice since before I started working in the IT industry. As a student, I grew up with Solaris at the data center of my university, and the Slackware I used at home seemed like a kid's toy compared to it. After graduating, I started working as a design engineer for a leading microprocessor producer. Needless to say, Solaris was the platform we ran our design software on. Then I moved to a consulting firm and started my career as a Java architect.

    Solaris was and is the platform of choice for most of the clients I've been working for. Even today, the application servers, the cluster software, the database, and most of the infrastructure used by my clients run on Solaris. It always seemed a sound choice to me, then, developing software on the same platform that will run it in production.

    IDEs, Tools and Runtimes

    As a Java developer, I can run all of the tools I need on a supported platform. My favorite IDEs (NetBeans and JDeveloper), the application servers my clients use (WebLogic and WebSphere, mostly), the databases used by my applications (MySQL, Oracle RDBMS, PostgreSQL): all of them run and are supported on Solaris. Some of them are even bundled with it or readily available from Sun-sponsored package repositories. The Eclipse platform, to cite a widely used IDE for Java, is available in the OpenSolaris IPS repository, too.

    Solaris Technologies

    Solaris 10 also integrates DTrace, a powerful, unobtrusive framework that allows you to observe and troubleshoot application and OS problems in real time, even on production systems, with almost null overhead. DTrace has allowed us to diagnose strange production quirks with no downtime: once you've tried DTrace and the D language, there's no going back to "just" a debugger, even in the development stages of your project.

    Other kinds of problems do not show up in your debugger or are extremely hard to catch. That might be the case with network or file system problems. That's where DTrace comes in handy: it lets you observe in incredibly high detail what's going on in your application and in the kernel of the operating system, if it's necessary to dig that deep.

    Solaris Virtualization Technologies

    Solaris is also an ideal host virtualization platform. Solaris can "virtualize itself" with Containers, Zones and Logical Domains: you can start a Zone in no time (and almost no space overhead), assign a resource cap to it and also build a virtualized network in a box to simulate a complex network environment.

    One of the problems I encountered during the development of big enterprise systems is that the development environment, and sometimes even the integration environment, is very different from the production one. It surely is a methodology problem; nevertheless, developers have few weapons to counteract it. For example, applications that appear to run fine on a single node may not run correctly on a server cluster, or may scale badly.

    The longer you wait to catch a bug, the greater the impact of its fix. That's why in my development team, for example, we use Solaris Zones to simulate a network cluster of IBM WebSphere Application Servers and a DB cluster. All of them run in completely isolated Zones on one physical machine and communicate over a virtual network with virtual etherstubs (a sort of network switch), VLANs and routing rules. This environment lets us simulate exactly how the software will behave in the production system. Without a flexible and lightweight virtualization technology, it would have been much more difficult and expensive to prepare a similar environment.

    And if you (still) need to run other platforms, you can use Xen or VirtualBox to run, for example, your favorite Linux distro, Windows, or *BSD.

    Summarizing

    Enumerating the advantages of Solaris is difficult in such a reduced space; however, I'll try:
    • It's a commercially supported operating system: support is optional, since Solaris is free for development purposes. Nonetheless, it's an important point to take into account.
    • It's (very) well documented: there's plenty of official and unofficial documentation.
    • It's easy to administer: Solaris is pretty easy to administer, even if you're not a seasoned system administrator.
    • It's a UNIX system: with all of its bells and whistles.
    • It is a great virtualization platform.
    • It has some unique technologies that add much value to its offering, such as ZFS and DTrace.


    If you're a Java developer and haven't given Solaris a try, I strongly suggest you do. Maybe you'll also start to benefit from other Solaris 10 technologies such as Zones and ZFS, even for running your home file or media server.

    Complaints

    I often hear complaints about Solaris coming from different sources and with the most imaginative arguments: proprietary, closed, old, difficult to use. I usually answer by inviting users to try it and see for themselves before judging it. Most of the time, I'm not surprised to discover that the person complaining has had minimal or no exposure to Solaris.

    Also, I'd like to point out that no OS I've tried is a Swiss Army knife. Solaris is a server-oriented OS with a good desktop, but it's not comparable with other operating systems for that use. So: no comparisons with Linux, please. That would be as unfair as comparing Linux and Mac OS X for the average home user. ;)

    Alternatives

    Since Java "runs anywhere", there's plenty of choice for a Java developer.

    Since I own a laptop with Mac OS X, I've built a small development environment on it with all of the tools I need. Mac OS X is a great operating system that comes with many fancy features out of the box and, although it has some idiosyncrasies with Java (read: you have to use the JVM shipped by Apple), it's a good OS for a Java developer. Since the Mac OS X hype began, plenty of packages have appeared for it and its ecosystem is still growing. Still, many software packages run in the enterprise aren't supported on Mac OS X. Since I prefer to have an environment as close as possible to the production one, I think OS X is not the best choice for the average Java EE architect.

    I've also been a hardcore Slackware and Debian user for a long time. An enterprise Java developer would miss nothing in a modern GNU/Linux distribution nowadays, and most of the software packages you'll find in the enterprise will run on your GNU/Linux distribution.

    No need to talk about Windows, either.

    So, why Solaris? Every OS has its own advantages and disadvantages; the point is simply to recognize them. Mac OS X, in my opinion, is the best OS for a home user: I wouldn't trade it for Windows or Linux. But as far as my developer's duties are concerned, every other OS just lacks the features and the stability that make Solaris great. ZFS, DTrace, and Zones, for my use cases, are killer features.

    What's Next?

    You've decided to give Solaris a try, so: which distribution is right for you? I don't know.

    Solaris Express/Oracle Solaris

    I strongly suspect that my wait will be a long one and that I will finally upgrade my machines as soon as Solaris Express is released. Upgrading to Solaris 10 09/10 is not possible for me, since I'm using some ZFS pools whose version is not yet supported by Solaris proper, but it is a sound choice for a newcomer.

    The advantage I see in using one of these versions is the availability of optional support and the good level of integration with the most commonly used software packages that Oracle is likely to guarantee.

    OpenIndiana

    You should also know that the OpenSolaris sources have been (sort of) forked and two new projects have been born: Illumos and OpenIndiana. The projects were started by Nexenta employees and volunteers from the OpenSolaris community. The first project aims at maintaining the OpenSolaris code, including the parts that are closed or that upstream might choose not to maintain. The OpenIndiana project aims at producing a binary distribution of the operating system built upon the Illumos source code. OpenIndiana will provide a truly open source, binary-compatible alternative to Solaris and Solaris Express.

    It sounds good, and I'll willingly support it. In the meantime, I've installed OpenIndiana in a couple of virtual machines and first impressions are very good. I suppose not enough time has passed yet for diverging changes to have emerged.

    If you prefer a more modern desktop with a recent GNOME interface and don't feel like waiting for Solaris Express, drop Solaris 10 and go for OpenIndiana. In any case, switching between the two shouldn't pose any problems. What's clear to me is that I won't use both operating systems: I'll have to make a choice.

    Support Availability

    As an enterprise user and a Java developer, I've always been more concerned about OS support, and support for the packages I use, than about eye candy, even at the cost of running a proprietary platform.

    In conclusion: I'll wait for Solaris Express to be released and only then decide which one I'll use for my purposes, Oracle Solaris Express or OpenIndiana. My heart is betting on OpenIndiana. My brain is betting on Oracle Solaris Express and Solaris proper. Only time will tell which one is right (for me).

    Follow-Up

    A follow-up to this blog post is available at this address. In it, I try to summarize some use cases in which the technologies introduced here are effective and add real value to your development duties.

    I hope you enjoy it.



    Tuesday, September 28, 2010

    EJB 3.1 Global JNDI Access

    As outlined in the previous parts of this series, the major drawback of the EJB v. 3.0 Specification was the lack of portable global JNDI names: there was no portable way to link an EJB reference to a bean outside your application.

    The EJB v. 3.1 Specification fills this gap by defining, in its own words:

    "a standardized global JNDI namespace and a series of related namespaces that map to the various scopes of a Java EE application."

    This blog post will give you an overview of the Global JNDI Access as defined by the EJB v. 3.1 Specification.

    Namespaces and Scopes

    The EJB v. 3.1 Specification defines three distinct namespaces, each with its own scope:
    • Global.
    • Application.
    • Module.

    The specification requires compliant containers to register all session beans under the required JNDI names. Since these standardized names are portable, your application components will be able to establish a reference to an EJB using a name that works unchanged across application servers.

    Global

    Names in the global namespace will be accessible to code in any application and conform to the following syntax:

    java:global[/<app-name>]/<module-name>/<bean-name>[!<interface-FQN>]

    <app-name> is the name of the Java EE application as specified in its standard deployment descriptor (application.xml) or, by default, the base name of the deployed EAR archive. This path fragment is used only if the session EJB is deployed within a Java EE application EAR file.

    If the session EJB is deployed in an EAR file, its <module-name> is the path name of the Java EE module that contains the bean (without extension) inside the EAR file. If the session bean is deployed as a standalone Java EE component in a JAR file or as part of a Java EE web module in a WAR file (which is now allowed by the Java EE 6 Specification), the <module-name> is the name of the archive (without extension). The <module-name> value can be overridden by the <module-name/> element of the component's standard deployment descriptor (ejb-jar.xml or web.xml).

    The <bean-name> is the EJB name as specified by the mechanisms described in the previous parts of this series.

    The <interface-FQN> part is the fully qualified name of the EJB business interface.

    The container has to register one global JNDI entry for every local and remote business interface implemented by the EJB, and one for its no-interface view, if any.
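
    To make the naming scheme concrete, here is a minimal sketch; the bean, package, and archive names (CartBean, com.example.Cart, shop.jar, store.ear) are hypothetical and are not mandated by the specification. A CartBean session bean exposing the com.example.Cart remote business interface, packaged in shop.jar inside store.ear, would be registered as java:global/store/shop/CartBean!com.example.Cart and could be looked up like this:

    import javax.naming.Context;
    import javax.naming.InitialContext;
    import javax.naming.NamingException;

    public class CartClient {

      public Cart lookupCart() throws NamingException {
        // The name is portable: no vendor-specific JNDI convention is involved.
        Context ctx = new InitialContext();
        return (Cart) ctx.lookup(
            "java:global/store/shop/CartBean!com.example.Cart");
      }
    }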

    Session EJB With One Business Interface or a No-Interface View

    If an EJB implements only one business interface or only has a no-interface view, the container is also required to register such a view with the following JNDI name:

    java:global[/<app-name>]/<module-name>/<bean-name>
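
    Continuing the hypothetical example above, if CartBean exposed only the Cart interface (or just a no-interface view), the shorter form would resolve to the same bean:

    // Valid only because the bean exposes a single client view (hypothetical names).
    Cart cart = (Cart) new InitialContext().lookup("java:global/store/shop/CartBean");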

    Application

    Names in the application namespace will be accessible only to code within the same application and conform to the following syntax:

    java:app/<module-name>/<bean-name>[!<interface-FQN>]

    Each path fragment retains the same meaning described for the global namespace JNDI names syntax in the previous section.

    The same publishing rules for a compliant container described in the previous section apply.
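
    As a sketch of how a component in another module of the same (still hypothetical) store.ear could use the application-scoped name, the lookup attribute of the @EJB annotation, introduced in Java EE 6, accepts it directly:

    import javax.ejb.EJB;
    import javax.servlet.annotation.WebServlet;
    import javax.servlet.http.HttpServlet;

    @WebServlet("/checkout")
    public class CheckoutServlet extends HttpServlet {

      // Injection by the portable, application-scoped name:
      // any module packaged in the same EAR can use it.
      @EJB(lookup = "java:app/shop/CartBean!com.example.Cart")
      private Cart cart;
    }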

    Module

    Names in the module namespace will be accessible only to code within the same module and conform to the following syntax:

    java:module/<bean-name>[!<interface-FQN>]

    Once more, each path fragment retains the same meaning described for the global namespace JNDI names.

    The same publishing rules for a compliant container described in the global namespace section apply.
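
    For example, code packaged in the same (hypothetical) shop.jar module could reach the bean with the shortest of the three forms:

    // Module-scoped lookup: only code packaged in the same module can resolve this name.
    Cart cart = (Cart) new InitialContext().lookup("java:module/CartBean!com.example.Cart");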

    Local Clients

    It is important to note that, although global JNDI names for local interfaces (and no-interface views) are published, this does not imply that such interfaces will be accessible to components running in another JVM.

    Conclusions

    The EJB v. 3.1 Specification, along with the other specifications of the Java EE 6 Platform, brings simplicity and adds many new features and tools to the developer's toolbox. Global JNDI names are an outstanding, although simple, feature because they finally remove a long-standing portability limitation of the previous revisions of this specification and, hence, of the entire Java EE platform.

    EJB 3.0 and EJB 3.1 provide a powerful, portable, yet simple component model for building enterprise applications. The "EJB sucks" days are gone, but only time will tell whether this technology will regain the trust of us, the developers.

    As far as I'm concerned, I feel pretty comfortable with Java EE 6, EJBs, CDI beans, the good support I get from IDEs such as NetBeans and JDeveloper (although the latter does not yet support EJB 3.1), and all the specifications that make up this venerable platform.