Tycho and pre-p2 update sites

(from "Simplifying The p2 Process, Part 3: Associate Sites")
Even though p2 has taken over the world when it comes to Eclipse update sites, you can still find old-style update sites around. This doesn't matter much as long as you only want to install features in Eclipse itself - it just takes longer, as all features of the update site are downloaded - but if you want to use these old-style update sites in a Tycho build, then you have a problem, as Tycho only supports p2 based repositories, not the old-style update sites.

During the development of the next version of our AGETOR product, we recently had this exact problem with the Elver update site, which contains the plug-ins needed if you want to work with Teneo. In this post, I'll try to describe how you can solve the problem, and likely also get a faster and more stable module resolution in your Tycho build.


Sources of Eclipse Related Information

Eclipse can sometimes seem to be a very big and difficult animal to master or even just understand - especially when you want to implement your first new piece of functionality in Eclipse.

One of the questions I have tried to answer on many occasions is where you can find good and relevant sources of information on Eclipse technologies. As it happens, there are many sources of information on Eclipse technology, and below I have tried to list those I use or would recommend. If you feel any important sources have been left out, please feel free to comment below and I'll try to update this post...


Tycho Test Trouble - Expectations and Realities

As part of my job, I have converted some of our Eclipse based products to use the Tycho build system instead of PDE Build and various internal build tools. For the most part, this has been a pleasant experience, where most of the trouble has been with the tests and getting these to work in the new environment. At EclipseCon Europe 2012, I talked about this in the presentation "Beware: Testing RCP Applications in Tycho can cause Serious Harm to your Brain".
One of the things I really like about Tycho is the fact that when a build fails, you can try to re-build the failed module by just running Tycho in that module and expect to get the same results as if you had started the build again from the top parent POM.

But now I have a situation where this is not the case: when I run the (global) build from the top parent POM, the build fails consistently, whereas the "local" build in the failing module succeeds consistently! My expectations do not match the realities! Which is a particularly bad thing when it comes to build systems, as we really want to rely on them and not think too much about them in our daily work.

It has been very hard to find the reason behind this behavior - thus this blog entry, in the hope that others will not have to go through the same trouble.

A Little Background

The bird's-eye view of the operation of Tycho - and Maven - is rather simple: first it reads the build information for all the modules in the build configuration, then it reorders them to satisfy the dependencies between them, and lastly it builds the modules in sequence (see the Maven documentation for the details). Tycho automatically adds the extra dependencies from the OSGi/Eclipse related build files such as MANIFEST.MF, feature.xml, categories.xml, etc.

When you run your test plug-ins, you sometimes want to add additional plug-ins to the launch configuration - dependencies that cannot be determined by Tycho from the usual build configuration files. This can be for many different reasons: optional dependencies, RAP versus RCP differences, use of OSGi Declarative Services, use of the Equinox Extension Registry, use of update sites, use of JSR 223 and buddy class loading, just to name a few. These extra dependencies can be declared relatively easily in pom.xml in the surefire section as described on the Tycho Wiki.

        <!-- RAP -->
        <!-- Groovy support -->
        <!-- Logging via DS -->

You can have dependencies on both bundles and features - the latter can be very useful in cases where you have fragments that depend on the environment!
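As a sketch, a tycho-surefire-plugin section with such extra dependencies could look like this - the logging bundle is the one discussed later in this post, while the feature ID is an illustrative placeholder, and 0.0.0 means "any version":

```xml
<plugin>
  <groupId>org.eclipse.tycho</groupId>
  <artifactId>tycho-surefire-plugin</artifactId>
  <configuration>
    <dependencies>
      <!-- Logging via DS -->
      <dependency>
        <type>eclipse-plugin</type>
        <artifactId>com.agetor.core.logging.impl</artifactId>
        <version>0.0.0</version>
      </dependency>
      <!-- A feature dependency - useful to pull in environment specific
           fragments (feature ID is a made-up example) -->
      <dependency>
        <type>eclipse-feature</type>
        <artifactId>com.agetor.core.feature</artifactId>
        <version>0.0.0</version>
      </dependency>
    </dependencies>
  </configuration>
</plugin>
```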

If the build fails - usually because a test fails - Tycho stops and ignores the rest of the modules in the build sequence. At this point you usually fix the problem and then either re-run the complete build or try to re-build only the affected modules (sometimes a little dangerous, but often much faster).

Like most other developers, I always run my top parent POM build with the option -Dtycho.localArtifacts=ignore (see the Tycho Wiki for the details). This ensures that only controlled artifacts are used in the build product: the artifacts must come from the target platform, from the Maven repositories or be the result of other modules in the build reactor. Thus any artifacts from previous builds are simply ignored and cannot sneak into the product. Of course, when you have to (re-)build a single module, you have to leave out this option.
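Concretely, the two ways of running the build look something like this (the module directory is just an example):

```shell
# The controlled build from the top parent POM - local artifacts are ignored
mvn clean verify -Dtycho.localArtifacts=ignore

# Re-building a single module - here the option must be left out, so the
# previously built artifacts of the other modules can be resolved
cd com.agetor.core.tests && mvn clean verify
```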

The Problem

Which brings me to the problem we experienced yesterday.

Yesterday, two things happened: I added a new test plug-in (com.agetor.core.tests) to the application... and suddenly the build failed consistently. I have added many test plug-ins to the build before, and this plug-in was very similar to all of them. I just wanted to test some very basic utility functions that had been left untested before - and thus the new module was added near the top of the parent POM along with the base module that was tested.




When I tried to re-run the build on just the (new) failed test plug-in, it consistently succeeded!

The error messages from the failed tests seemed to indicate that some OSGi Declarative Services had not been properly started because some classes were missing - ClassNotFoundException - but when I used the ss command of the OSGi console to look at the started bundles, all bundles had been started as expected.

Then I tried to dissect the two OSGi configurations built by Tycho - one for the failing build and one for the succeeding build. Here I noticed a peculiar difference: config.ini for the failing build contained references to the plug-in folders of some of the used plug-ins rather than the jar files for these:
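In outline, the difference showed up in the osgi.bundles property of config.ini - something like the following, where all the paths are made up for illustration:

```
osgi.bundles=reference:file:/home/me/.m2/repository/p2/osgi/bundle/com.agetor.core/1.0.0/com.agetor.core-1.0.0.jar@4:start, \
 reference:file:/home/me/git/agetor/com.agetor.core.logging.impl@4:start
```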


The first line is correct, the last line is not! (The latter line could have made some sort of sense if only dev.properties had included the appropriate line, but... not for a Tycho build!) In the succeeding build, the references pointed directly into my Maven repository.

This was rather weird! But it did explain the problem with the missing classes: OSGi would find the MANIFEST.MF files just fine, but not the class files, as these are not located in the root of the plug-in.

I spent the next hour or two checking all the various files in this test plug-in against an older working test plug-in, yet finding nothing that could account for any of this.

We use OSGi Declarative Services as well as a number of fragments in the product, and thus we have added a number of surefire dependencies for these directly in pom.xml (as shown above). And on reflection, it was almost exactly the set of plug-ins added to the POM this way that was not correct in config.ini. Until yesterday this worked fine. Weird indeed!

(At this point I had tried to compile with various different versions of Tycho, but 0.16.0, 0.17.0-SNAPSHOT and the newly staged 0.17.0 all had the same behavior.)

It was at this point I noticed that the build order for the modules in the product was a little strange!

[INFO] Reactor Build Order:
[INFO] com.agetor.parent
[INFO] com.agetor.target
[INFO] com.agetor.core.parent
[INFO] com.agetor.core
[INFO] com.agetor.core.logging
[INFO] com.agetor.test.utils
[INFO] com.agetor.test.utils.rap
[INFO] com.agetor.core.tests
[INFO] com.agetor.core.utils.pde
[INFO] com.agetor.core.utils
[INFO] com.agetor.server.parent
[INFO] com.agetor.server
[INFO] com.agetor.core.utils.tests
[INFO] com.agetor.core.utils.tests.fragment
[INFO] ....

The test plug-in - com.agetor.core.utils.tests - had a pom.xml dependency on com.agetor.core.logging.impl (shown above), and thus I expected the latter to appear in the build order before the test plug-in. But it did not! Until this point, I had always assumed (or expected) that these extra surefire dependencies were taken into consideration in the build sequence in the same manner as the dependencies from MANIFEST.MF, feature.xml, etc...

Now I had something to google, and searching for "tycho surefire dependency reactor" returned a relatively new Bugzilla issue by Tobias Oberlies: "Dependencies configured in tycho-surefire-plugin don't affect build order". Bingo!

The Solution

The immediate solution to the problem was simple, as Tobias was so kind as to include two possible workarounds in the bug report:
Workaround: change the module order in the root POM to bundle-1, bundle-3, bundle-2, or add Require-Bundles from the test bundle to the other two.
I have re-ordered the modules in the parent POM to reflect the dependencies a little more closely, and also added a few explicit dependencies in the MANIFEST.MF of the test plug-ins where the former didn't work.
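For the record, the second workaround simply amounts to a line like the following in the MANIFEST.MF of the test plug-in (using the bundle names from this post):

```
Bundle-SymbolicName: com.agetor.core.utils.tests
Require-Bundle: com.agetor.core.logging.impl
```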


Eclipse and command line arguments

If you have ever wanted to run an Eclipse based application on one machine and debug it from another machine, you have probably run into a rather annoying problem: if you specify -vmargs on the command line, you must (re-)specify all the Java VM arguments from eclipse.ini, as these are replaced and not just appended to...

So the following will likely not work as intended:

# eclipse -vmargs -Xdebug -Xrunjdwp:transport=dt_socket,server=n,address=...

But there is an easy way around this: if you also specify --launcher.appendVmargs before -vmargs, then the following arguments are appended to the Java VM arguments from eclipse.ini.

# eclipse --launcher.appendVmargs -vmargs -Xdebug -Xrunjdwp:transport=dt_socket,server=n,address=...

There are still some limitations to what you can do, as the Java VM in some cases uses the first occurrence of an argument, so you cannot replace an argument from eclipse.ini this way.
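As far as I know, --launcher.appendVmargs can also be set permanently in eclipse.ini itself, just before the -vmargs line, so command line -vmargs are always appended - a sketch, with an illustrative memory setting:

```
--launcher.appendVmargs
-vmargs
-Xmx512m
```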

See the Eclipse Wiki for all the gory details.


Changing lanes...

After 5 years and lots of fun with Eclipse, I will start as a Systems Architect at a local Danish company from September.

A major part of my new job will be to introduce Eclipse technology into the products, so I basically move from being an Eclipse service provider to being a consumer. As such, I expect to stay in the ecosystem, and I might meet you all again at EclipseCon...

If any of you have an interest in my training materials, please let me know. I have training modules that cover most aspects of plug-in development, including a set of modules for the most common modeling techniques... The very basic parts are yours for free - I have already started to put them on SlideShare with more to come - but the more advanced stuff is for sale.

So Long, and Thanks for All the Fish [*].



Testing a Plug-in - User Interface Testing

This post is part of my "Testing a Plug-in" series.

Previous posts are:
Assume you have a plug-in with a number of views, editors, dialogs and wizards - just how do you test the functionality of the SWT controls in one of these?

Please also note that I am using the opportunity with this blog series to rewrite, test, refactor and generally pretty up my old test tools. Within the next couple of weeks, I will put the tools on GitHub for general use, but until then, I will include the relevant parts of the tools in the posts...

As always, I'm very interested in feedback so please feel free to comment with changes, fixes, enhancements....

Different Approaches

This is an area where there are a number of very different approaches.

You can drive the application from the outside using a tool that basically acts as the user. The main problem with this is, of course, that even smaller changes to the layout of views or dialogs can mean you have to redo major parts of your tests.

You can synthesize the events sent to the application using Display.post(...). In this case the coordinates for the controls can be calculated based on the actual positions of the controls, and thus this approach is less sensitive to changes in the layout than the "outside" tester. Note that it is not always easy to get the Event to be posted right, and there is some rather complicated stuff involved - e.g. to ensure that two mouse clicks are considered two "single" clicks and not a double click, you have to know the "double-click time" and wait if needed... Also note that especially the Cocoa edition of SWT has had some very serious problems with key support for Display.post(...) in the past - though these should be fixed with Eclipse 3.7.
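The double-click timing logic mentioned above can be sketched as a small helper - in real SWT test code the threshold would come from Display.getDoubleClickTime(); the values in main are only for illustration:

```java
public class ClickTiming {
    /**
     * Milliseconds to wait before posting a second mouse click so the two
     * clicks are not interpreted as a double click. In real SWT code,
     * doubleClickTime would come from Display.getDoubleClickTime().
     */
    public static long waitNeeded(long lastClickMillis, long nowMillis, int doubleClickTime) {
        long elapsed = nowMillis - lastClickMillis;
        // If the double-click interval has already passed, no waiting is needed
        return elapsed >= doubleClickTime ? 0 : doubleClickTime - elapsed;
    }

    public static void main(String[] args) {
        // Second click 100 ms after the first, with a 500 ms double-click time:
        // we must wait another 400 ms before posting it
        System.out.println(waitNeeded(0, 100, 500));
    }
}
```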

You can drive most controls directly using the SWT API and get the same result as if a user had made changes to the control directly. Note, though, that there are many smaller differences between the supported operating systems as to what exactly happens when you do this, so only the most common operations should be done this way. Also, there are some operations that cannot be performed this way - e.g. selecting a push Button - where the Display.post(...) method must be used instead.

I gave a presentation at Eclipse Summit Europe 2010 about this, and you might want to have a look at it for further information.

Eclipse Summit Europe '10 - Test UI Aspects of Plug-ins


Different Tools

There are many different tools available as well. One official tool is SWTBot, though personally I prefer the set of methods I have developed over time, which takes a very simple approach to UI testing - see the slides above.

How To Do It

Below, you can find three small examples of how to test some very basic UI related stuff using the second and third approaches above. They are based on the following very simple view: you can type characters into the first text field and then press the "Copy" button, at which point the text from the first Text field should be copied to the second Text field. As always, you can find a zip here with all the code.

In order to utilize the third approach above, we must have access to the controls of the view. This is done using either public or package protected fields in the view class. If you think this will make the fields too vulnerable, then consider adding access methods for each of them. I usually make them package protected, which means only Java classes in the same package can access the fields. As only Java classes in this plug-in and in any attached fragment can ever do this, we are reasonably safe - or at least as safe as you can ever get with fragments.

  • we create fields in the view that are package protected
  • we use a test fragment - as described elsewhere
  • we put our tests for the view in question in a package in the fragment with the same name as in the plug-in
The relevant parts of the view look as follows:

public class CopyView extends ViewPart {
 /* package for testing */Text myFromText;
 /* package for testing */Text myToText;
 /* package for testing */Button myButton;

 public void createPartControl(Composite parent) {
  final Composite top = new Composite(parent, SWT.NONE);
  top.setLayoutData(new GridData(SWT.FILL, SWT.FILL, true, true));
  top.setLayout(new GridLayout(2, false));

  final Label fromLabel = new Label(top, SWT.NONE);
  fromLabel.setLayoutData(new GridData(SWT.END, SWT.CENTER, false, false));

  myFromText = new Text(top, SWT.SINGLE | SWT.LEAD | SWT.BORDER);
  myFromText.setLayoutData(new GridData(SWT.FILL, SWT.CENTER, true, false));

  final Label toLabel = new Label(top, SWT.NONE);
  toLabel.setLayoutData(new GridData(SWT.END, SWT.CENTER, false, false));

  myToText = new Text(top, SWT.SINGLE | SWT.LEAD | SWT.BORDER | SWT.READ_ONLY);
  myToText.setLayoutData(new GridData(SWT.FILL, SWT.CENTER, true, false));

  myButton = new Button(top, SWT.PUSH);
  myButton.setText("Copy");
  myButton.setLayoutData(new GridData(SWT.END, SWT.CENTER, true, false, 2, 1));

  myButton.addSelectionListener(new SelectionListener() {
   public void widgetSelected(SelectionEvent e) {
    // Copy the text from the first field to the second
    myToText.setText(myFromText.getText());
   }

   public void widgetDefaultSelected(SelectionEvent e) {
   }
  });
 }

 public void setFocus() {
  myFromText.setFocus();
 }
}
We can test at least three aspects of the view UI here:
  • that the first field has focus initially. This is really more a view aspect here, but the same test should be done for dialogs and wizards, so I leave it in.
  • that controls are R/W or R/O as needed and all controls are enabled
  • that the functionality of the "Copy" button is correct
To do any of this, we must first show the view. This can be done with a method from BaseTestUtils. Likewise, the view must be closed again after use.

private CopyView myView;

@Before
public void before() {
 myView = (CopyView) BaseTestUtils.showView("com.rcpcompany.testingplugins.ex.ui.views.CopyView");
}

@After
public void after() {
 // Close the view again - the standard workbench API works for this
 myView.getSite().getPage().hideView(myView);
}

To test that the first field has focus, we can do this:

@Test
public void testFocus() {
 assertTrue(myView.myFromText.isFocusControl());
}
Pretty simple, I would think.

Likewise, to test the R/O state of all the involved controls, we can do this:

@Test
public void testFieldsRO() {
 assertEquals(SWT.NONE, myView.myFromText.getStyle() & SWT.READ_ONLY);
 assertEquals(SWT.READ_ONLY, myView.myToText.getStyle() & SWT.READ_ONLY);
}

Also, very simple.

To test the functionality, it is a little more involved.

@Test
public void testFlow() {
 assertEquals("", myView.myFromText.getText());
 assertEquals("", myView.myToText.getText());

 final String TEST_STRING = "test string";

 // Make sure the first control has focus, as it would in the real world
 myView.myFromText.setFocus();

 // Assigning the text is identical to pasting it into the control
 myView.myFromText.setText(TEST_STRING);

 // Press the "Copy" button by posting an artificial selection event
 // (the original code used a BaseTestUtils helper method for this)
 myView.myButton.notifyListeners(SWT.Selection, new Event());

 assertEquals(TEST_STRING, myView.myFromText.getText());
 assertEquals(TEST_STRING, myView.myToText.getText());
}

First I test that both text controls are empty, just to make sure the initial condition is well-defined.

Then we make sure the first control has focus, because that would always be the case in the real world. Not that it matters in this case, but there are cases with focus listeners where it would matter.

Next, the content of the first control can be assigned to. This is identical to pasting the string into the control, as both the verify and modify listeners are run - though the key or mouse events for the paste command are not seen.

To press the "Copy" button, we must post an artificial event, as there is no way to do this via the Button API. Luckily, BaseTestUtils includes a large number of methods that can help.

And last, we can test the result...

One could test even further... E.g. it is not clear from the test above whether the text of the second control is assigned to or appended to... I'll leave that as an exercise for the interested...

To run the tests, select the project and "Run As..." > "JUnit Plug-in Test"... In some cases, the test will not succeed the first time, and you will have to set the application and plug-ins in the launch configuration first...

One technical note: if you make any changes to the dependencies of the fragment, then remember to add the "-clean" argument to your launch configuration. Otherwise, OSGi will not pick up the new dependency and you will be left with a long and frustrating time debugging the problems... I should know :-)


Testing a Plug-in - Where to put the tests

This post is part of my "Testing a Plug-in" series.

Say you have decided to test your plug-ins; the first question, of course, is where exactly to put the tests. As it happens, there are a number of possibilities, each with its own advantages and disadvantages.

In the following, I assume we are using JUnit 4, as this does not put any requirements on the super-classes. With JUnit 3, our options are a little more limited.

The tests can be put inline in the application plug-ins
  • Pros:
    • This is very easy to manage and understand. Also, it is very easy to refactor the code, as you do not have to remember to have any specific test plug-ins or fragments in your workspace when you refactor - though that is always a good idea!
    • Your tests will have access to everything - you can even put the test in the same class as the methods to be tested.
  • Cons:
    • The tests will be in the final product, though it is possible to get the Java compiler to leave out the test methods based on a special annotation. If you remove them from the plug-in during the release phase of the project, you have the additional problem that what you test is not exactly your release...
    • If your tests require any specific extensions in plugin.xml, then these will be harder to remove from the plug-in during compilation. This can be needed if you declare any additional extension points in your target plug-ins and need to test these.
The tests can be put into a separate plug-in
  • Pros:
    • The tests are separate from the product, so the cons of the previous option are not present here.
    • The test plug-in will have the same access to the exported Java packages as any "ordinary" consumer plug-in.
  • Cons:
    • Because of OSGi and the access rules it imposes on bundles, you only have access to the exported Java packages. Of course, you can use x-friends to allow specific visibility for specific packages and friendly bundles, but who wants to do that if it can be avoided?
The tests can be put in an attached fragment
  • Pros:
    • The tests have access to everything as a fragment effectively "runs" inside the host plug-in.
    • Like the previous option, the tests are separate from the product.
    • You can even test new extension points and other stuff that requires additions to plugin.xml...
  • Cons:
    • A fragment is a slightly different "thing" than a plug-in, and many developers shy away from them for this reason.
    • Some build tools have problems running tests that are put inside fragments - though these problems seem to be gone in the 
I usually prefer a mix of fragments and plug-ins for my tests. The fragments are used for all tests of the internals of the target plug-in, and the plug-ins are used for the needed black-box tests of the same target plug-ins.
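For reference, a test fragment is declared much like a plug-in, with a Fragment-Host header pointing at the target plug-in - a minimal MANIFEST.MF sketch with hypothetical bundle names:

```
Manifest-Version: 1.0
Bundle-ManifestVersion: 2
Bundle-SymbolicName: com.example.myplugin.tests
Bundle-Version: 1.0.0.qualifier
Fragment-Host: com.example.myplugin
Require-Bundle: org.junit;bundle-version="4.0.0"
```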

One area where the plug-in approach is needed is in testing that the needed interfaces from the extension points are indeed exported and fully available. One problem you might otherwise see is that super-classes are internal to the target plug-in, which can give some very hard to understand error messages.


EclipseCon 11 Slides uploaded

As promised, I have just uploaded my slides and related materials to the EclipseCon website.

Using Adapters to Handle Menus and Handlers in Large Scale Applications
10 Techniques to Test a Plug-in

You can find all the other uploaded materials on the "In the Public" page.

And now I need to get back to the conference!