Category: Uncategorised

Java non-blocking servers, and what I expect node.js to do if it is to become mature

node.js is getting a lot of attention at the moment. Its goal is to provide an easy way to build scalable network programs, e.g. web servers. It's different in two ways. First of all, it brings Javascript to the server. But more importantly, it's event based rather than thread based, so it relies on the OS to send it events when, say, a connection to the server is made, so that it can handle that request. The argument goes that a typical web server "handles each request using a thread, which is relatively inefficient and very difficult to use." As an example they state that "Node will show much better memory efficiency under high loads than systems which allocate 2MB thread stacks for each connection." They go on to state that "users of Node are free from worries of dead-locking the process — there are no locks. Almost no function in Node directly performs I/O, so the process never blocks. Because nothing blocks, less-than-expert programmers are able to develop fast systems". Well, reading those statements makes me think that I have problems in my world which need addressing! But then I look at my world and realise that I don't have problems. How can that be? Well, first of all, because we don't have a thread per request; rather, we use a thread pool, reducing the memory overhead and queuing incoming requests if we can't run them in a thread instantly, until a thread becomes free in the pool. The…
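The thread pool approach described above can be sketched in a few lines of Java. This is only a minimal illustration, not code from the post; the port, the pool size and the handler are made up:

import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ThreadPoolServer {

    private static final int PORT = 8080;      // hypothetical port
    private static final int POOL_SIZE = 100;  // hypothetical pool size

    public static void main(String[] args) throws IOException {
        ExecutorService pool = Executors.newFixedThreadPool(POOL_SIZE);
        try (ServerSocket serverSocket = new ServerSocket(PORT)) {
            while (!Thread.currentThread().isInterrupted()) {
                Socket socket = serverSocket.accept(); // blocks until a client connects
                pool.execute(() -> handle(socket));    // queued until a pooled thread is free
            }
        } finally {
            pool.shutdown();
        }
    }

    private static void handle(Socket socket) {
        try (Socket s = socket) {
            s.getInputStream();  // read the request here
            s.getOutputStream(); // write the response here
        } catch (IOException e) {
            // log and discard the failed connection
        }
    }
}

The executor's internal queue is what holds requests back while all pooled threads are busy, which is the queuing behaviour mentioned above.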

Read more

Sharing, using Social Media Links

Below is some Javascript which lets users share the current page on several social media sites. It relies on simple anchor tags wrapping images, like those on the top right of this blog. You could equally use server-side code to fill in the links, but my blog (Pebble) doesn't seem to let me do that (the template doesn't let you add Java tags to the JSP?!). Note how some use an escaped URL and others don't.

var title = escape("Ant's blog");
var url = escape(window.location.href);

var facebookLink = document.getElementById("facebookLink");
facebookLink.href = "http://www.facebook.com/sharer.php?u=" + url + "&t=" + title;

var twitterLink = document.getElementById("twitterLink");
twitterLink.href = "http://www.twitter.com/home?status=" + window.location.href;

var googleLink = document.getElementById("googleLink");
googleLink.href = "http://www.google.com/bookmarks/mark?op=add&bkmk=" + url + "&title=" + title;

var diggLink = document.getElementById("diggLink");
diggLink.href = "http://www.digg.com/submit?phase=2&url=" + window.location.href + "&title=" + title;

var myspaceLink = document.getElementById("myspaceLink");
myspaceLink.href = "http://www.myspace.com/Modules/PostTo/Pages/?u=" + url + "&t=" + title + "&c=" + title;

var deliciousLink = document.getElementById("deliciousLink");
deliciousLink.href = "http://del.icio.us/post?url=" + url + "&title=" + title;

var redditLink = document.getElementById("redditLink");
redditLink.href = "http://reddit.com/submit?url=" + url + "&title=" + title;

var stumbleUponLink = document.getElementById("stumbleUponLink");
stumbleUponLink.href = "http://www.stumbleupon.com/submit?url=" + url + "&title=" + title;

Read more

Node JS and Server side Java Script

Let's start right at the beginning. Bear with me, it might get long... The following snippet of Java code could be used to create a server which receives TCP/IP requests:

class Server implements Runnable {
    public void run() {
        try {
            ServerSocket ss = new ServerSocket(PORT);
            while (!Thread.interrupted()) {
                Socket s = ss.accept();
                s.getInputStream();  // read from this
                s.getOutputStream(); // write to this
            }
        } catch (IOException ex) { /* ... */ }
    }
}

This code runs as far as the line with ss.accept(), which blocks until an incoming request is received. The accept method then returns, and you have access to the input and output streams in order to communicate with the client. There is one issue with this code. Think about multiple requests coming in at the same time. You are committed to completing the first request before making the next call to the accept method. Why? Because the accept method blocks. If you decided to read a chunk off the input stream of the first connection, and then be kind to the next connection by accepting it and handling its first chunk before continuing with the original (first) connection, you would have a problem, because the accept method blocks. If there were no second request, you wouldn't be able to finish off the first request, because the JVM would be blocked on that accept method. So, you must handle an incoming request in its entirety before accepting a second incoming request. This isn't so bad, because you can create the ServerSocket…
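As a contrast to the blocking accept above, the same server can be written with java.nio so that a single thread multiplexes accept and read events instead of blocking on one connection at a time. The following is only a sketch under assumed values (port, buffer size), not code from the post:

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class NonBlockingServer {

    private static final int PORT = 8080; // hypothetical port

    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(PORT));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        while (!Thread.currentThread().isInterrupted()) {
            selector.select(); // blocks until at least one channel is ready
            Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
            while (keys.hasNext()) {
                SelectionKey key = keys.next();
                keys.remove();
                if (key.isAcceptable()) {
                    SocketChannel client = server.accept(); // ready, so this does not block
                    if (client != null) {
                        client.configureBlocking(false);
                        client.register(selector, SelectionKey.OP_READ);
                    }
                } else if (key.isReadable()) {
                    SocketChannel client = (SocketChannel) key.channel();
                    ByteBuffer buffer = ByteBuffer.allocate(1024);
                    if (client.read(buffer) == -1) {
                        client.close(); // the client closed the connection
                    }
                    // process whatever was read so far, then carry on multiplexing
                }
            }
        }
    }
}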

Read more

Playing with Cache Performance

My current client has a service which connects to an old IBM z/OS application (legacy system). The data centre charges for each message sent to this legacy system, rather than using a processor- or hardware-based pricing model. The output from this legacy system is always the same, since the calculations are idempotent. The application calculates prices for travelling along a given route of the train network. Prices are changed only twice a year, through an administration tool. So in order to save money (a hundred thousand dollars a year), the service which connects to this legacy system had an in-memory least-recently-used (LRU) cache built into it, which removes the least recently used entries when it gets full, in order to make space for new entries. The cache is quite large, thus avoiding costly calls to the legacy system. To avoid losing the cache content upon server restarts, a background task was later built to periodically persist the latest data inside this cache. Upon starting a server, the cache content is re-read. Entries within the cache have a TTL (time to live) so that stale entries are discarded and re-fetched from the legacy system. This cache was great in the beginning because it saved the customer a lot of money, but in the meantime several similar caches have been added, as well as more general caches for avoiding repeated database reads, causing our nodes to need over 1.5 GB of RAM. Analysis has shown that the caches are…
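For illustration, an LRU cache with a TTL like the one described can be built on LinkedHashMap's access-order mode. This is a minimal, assumption-laden sketch, not the client's actual code:

import java.util.LinkedHashMap;
import java.util.Map;

public class LruTtlCache<K, V> {

    private static class Entry<V> {
        final V value;
        final long expiresAt;

        Entry(V value, long expiresAt) {
            this.value = value;
            this.expiresAt = expiresAt;
        }
    }

    private final long ttlMillis;
    private final Map<K, Entry<V>> map;

    public LruTtlCache(final int maxEntries, long ttlMillis) {
        this.ttlMillis = ttlMillis;
        // access-order LinkedHashMap evicts the least recently used entry when full
        this.map = new LinkedHashMap<K, Entry<V>>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, Entry<V>> eldest) {
                return size() > maxEntries;
            }
        };
    }

    public synchronized void put(K key, V value) {
        map.put(key, new Entry<>(value, System.currentTimeMillis() + ttlMillis));
    }

    /** Returns the cached value, or null if it is absent or past its TTL. */
    public synchronized V get(K key) {
        Entry<V> entry = map.get(key);
        if (entry == null) {
            return null;
        }
        if (entry.expiresAt < System.currentTimeMillis()) {
            map.remove(key); // stale, so discard it and let the caller re-fetch from the legacy system
            return null;
        }
        return entry.value;
    }
}

Periodically persisting the content and re-reading it at start-up, as described above, would sit on top of a structure like this.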

Read more

DCI and Services (EJB)

Data, Context and Interaction (DCI) is a way to improve the readability of object oriented code. But it has nothing specific to say about things like transactions, security, resources, concurrency, scalability, reliability, or other such concerns. Services, in terms of stateless EJBs or Spring services, have a lot to say about such concerns, and indeed allow cross-cutting concerns like transactions and security to be configured outside of the code using annotations or deployment descriptors (XML configuration), letting programmers concentrate on business code. The code they write contains very little code related to transactions or security. These concerns are handled by the container in which the services live. Services, however, constrain developers to think in terms of higher order objects which deliver functionality. Object orientation (OO) and DCI let programmers program in terms of objects; DCI more so than OO. In those cases where the programmer wants to have objects with behaviour, rather than passing the object to a service, they can use DCI. If objects are to provide rich behaviour (i.e. behaviour which relies on security, transactions or resources), then they need to somehow be combined with services, which naturally handle these concerns with the help of a container, so that the programmer does not need to write low-level boilerplate code, for example to start and commit transactions. The idea here is to explore combining a service solution with a DCI solution. Comparing SOA to DCI, like I did in my white paper, shows…
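As a reminder of what that declarative style looks like, here is a minimal, purely illustrative stateless session bean (the names are hypothetical): the container starts and commits the transaction and checks the caller's role, so the method body contains business logic only.

import javax.annotation.security.RolesAllowed;
import javax.ejb.Stateless;
import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;

@Stateless
public class PricingService {

    @RolesAllowed("sales")
    @TransactionAttribute(TransactionAttributeType.REQUIRED)
    public void recordSale(String productId, int quantity) {
        // business logic only; the container wraps this call in a transaction
        // and rejects callers who are not in the "sales" role
    }
}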

Read more

DCI Plugin for Eclipse

The Data, Context, and Interaction (DCI) architecture paradigm introduces the idea of thinking in terms of roles and contexts. See some of my white papers for a more detailed introduction to DCI, but for this blog article, consider the following example: a human could be modelled in object oriented programming by creating a super huge massive class which encapsulates all of a human's attributes, behaviours, etc. You would probably end up with something much too complex to be really maintainable. Think about when a human becomes a clown for a kids' party; most of that behaviour has little to do with being a programmer, which is a different role which the human could play. So, DCI looks at the problem differently than OOP and solves it by letting the data class be good at being a data class, and putting the behaviours specific to certain roles into "roles", which in my DCI Tools for Java library are classes. Certain roles interact with each other within a given context, such as a clown entertaining kids at a birthday party. The roles which belong to an interaction are part of the context, and in DCI the context is a class which puts data objects into specific roles and makes them interact. The context and its roles form the encapsulation of the behaviour. I have updated my library so that there are two new annotations, namely the @Context and @Role annotations. The @Context annotation is simply a marker to show that a class…
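To make the data/role/context split concrete, here is a plain-Java sketch of the clown example. It deliberately does not use the DCI Tools for Java library or its annotations; it only shows the shape of the idea:

// data class: attributes only, no role-specific behaviour
class Human {
    String name;

    Human(String name) {
        this.name = name;
    }
}

// role: behaviour which only makes sense while playing this role
class Clown {
    private final Human human;

    Clown(Human human) {
        this.human = human;
    }

    void entertain(String audience) {
        System.out.println(human.name + " juggles for the " + audience);
    }
}

// context: puts data objects into roles and triggers the interaction
class BirthdayPartyContext {
    private final Clown clown;

    BirthdayPartyContext(Human entertainer) {
        this.clown = new Clown(entertainer);
    }

    void run() {
        clown.entertain("kids");
    }
}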

Read more

Dynamic Mock Testing

Have you ever had to create a mock object in which most methods do nothing and are never called, but others need to do something useful? EasyMock has some newish functionality to let you stub individual methods. But before I had heard about that, I had built a little framework (one base class) for creating mock objects which stubs those methods you want to stub, as well as logging every call made to the classes being mocked. It works like this: you choose a class which you need to mock, for example a service class called FooService, and you create a new class called FooServiceMock. You make it extend AbstractMock<T>, where T is the class you are mocking. As an example:

public class FooServiceMock extends AbstractMock<FooService> {

    public FooServiceMock() {
        super(FooService.class);
    }
}

It needs a constructor which calls the super constructor, passing the class being mocked too. Perhaps that could be optimised; I don't have too much time right now. Next, you implement only those methods you expect to be called. For example:

public class FooServiceMock extends AbstractMock<FooService> {

    public FooServiceMock() {
        super(FooService.class);
    }

    /**
     * this is a method which exists in FooService,
     * but I want it to do something else.
     */
    public String sayHello(String name) {
        return "Hello " + name + ", Foo here! This is a stub method!";
    }
}

To use the mock, you'll notice that it doesn't extend the class which it mocks, which might be problematic... Well, there are…
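One way such a base class could be built, assuming the mocked type is an interface, is with a JDK dynamic proxy which dispatches calls to identically named methods on the mock subclass and logs every invocation. The following is just an illustrative sketch of that approach, not the actual AbstractMock:

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.util.ArrayList;
import java.util.List;

public abstract class AbstractMock<T> {

    private final Class<T> mockedType;
    private final List<String> calls = new ArrayList<>();

    protected AbstractMock(Class<T> mockedType) {
        this.mockedType = mockedType;
    }

    /** Returns a proxy of T which delegates to stub methods declared on the subclass. */
    @SuppressWarnings("unchecked")
    public T getMock() {
        InvocationHandler handler = (proxy, method, args) -> {
            calls.add(method.getName()); // log every call made to the mocked class
            try {
                // if the subclass declares a method with the same signature, treat it as the stub
                Method stub = getClass().getMethod(method.getName(), method.getParameterTypes());
                if (!stub.getDeclaringClass().equals(Object.class)) {
                    return stub.invoke(this, args);
                }
            } catch (NoSuchMethodException e) {
                // no stub provided: fall through and return a default value
            }
            return defaultValue(method.getReturnType());
        };
        return (T) Proxy.newProxyInstance(
                mockedType.getClassLoader(), new Class<?>[] { mockedType }, handler);
    }

    /** Every method name called on the mock, in order. */
    public List<String> getCalls() {
        return calls;
    }

    private static Object defaultValue(Class<?> type) {
        if (!type.isPrimitive() || type == void.class) return null;
        if (type == boolean.class) return Boolean.FALSE;
        if (type == long.class) return 0L;
        if (type == double.class) return 0.0d;
        if (type == float.class) return 0.0f;
        if (type == char.class) return '\0';
        if (type == byte.class) return (byte) 0;
        if (type == short.class) return (short) 0;
        return 0; // int
    }
}

A test would then call new FooServiceMock().getMock() and pass the returned proxy wherever a FooService is expected.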

Read more

User Mental Models

I've spent the last 6 weeks looking into an interesting paradigm called Data, Context & Interaction (DCI). I've written a few introductory papers, and some tools too. DCI has the following goals:

- To improve the readability of object-oriented code by giving system behavior first-class status;
- To cleanly separate code for rapidly changing system behavior (what the system does) from that for slowly changing domain knowledge (what the system is), instead of combining both in one class interface;
- To help programmers reason about system-level state and behavior instead of only object state and behavior;
- To support an object style of thinking that is close to people's mental models, rather than the class style of thinking that overshadowed object thinking early in the history of object-oriented programming languages.

The problem recognised by this paradigm is that in traditional OOP, system behaviour gets fragmented over a large number of classes and sub-classes. The code directly related to a use-case gets fragmented and "lost". The boundaries for the behaviour end up being class boundaries, and pieces of behaviour are stuck with the data on which they operate, so that the code has high cohesion. If you want to read the code related to a use-case, you will struggle to extract it and understand it. I agree with these problems, but have not really encountered them personally for a long time, because I do most of my work in the SOA paradigm. When I first read these goals and all the articles I could…

Read more

JAX-WS Payload Validation, and Websphere 7 Problems

A WSDL file contains a reference to an XSD document which defines the data structures which can be sent to the service over SOAP. In an XSD, you can define a Type for an element, or things like an element's cardinality, whether it's optional or required, etc. When the web server hosting a web service is called, it receives a SOAP envelope which tells it which web service is being called. It could (and you might expect it does) validate the body of the SOAP message against the XSD in the WSDL... but it doesn't. Is this bad? Well, most clients will be generated from the WSDL, so you can assume that the type safety is respected. That said, it's not something the server can guarantee, so it needs to check that, say, a field which is supposed to contain a date really does contain a date, and not some garbled text that is meant to be a date. But more importantly, something which a client does not guarantee is whether all required fields in the data structure are actually present. To check this, you can validate incoming SOAP bodies against the XSD. The way to do this is by using "Handlers". The JAX-WS specification defines two kinds, namely SOAP Handlers and Logical Handlers. The SOAP kind is useful for accessing the raw SOAP envelope, for example to log the actual SOAP message. The logical kind is useful for accessing the payload as an XML document. To configure a handler,…
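A rough sketch of such a logical handler might look like the following. This is illustrative only: the schema location is a placeholder, the error handling is minimal, and with some JAX-WS runtimes the payload may need to be written back with setPayload after it has been read. The payload is copied into a DOM first because the Source returned by the runtime is not always a type the Validator accepts directly.

import java.io.File;

import javax.xml.XMLConstants;
import javax.xml.transform.Source;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMResult;
import javax.xml.transform.dom.DOMSource;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;
import javax.xml.ws.ProtocolException;
import javax.xml.ws.handler.LogicalHandler;
import javax.xml.ws.handler.LogicalMessageContext;
import javax.xml.ws.handler.MessageContext;

public class ValidatingHandler implements LogicalHandler<LogicalMessageContext> {

    private final Schema schema;

    public ValidatingHandler() throws Exception {
        SchemaFactory factory = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
        schema = factory.newSchema(new File("service.xsd")); // hypothetical schema location
    }

    @Override
    public boolean handleMessage(LogicalMessageContext context) {
        boolean outbound = (Boolean) context.get(MessageContext.MESSAGE_OUTBOUND_PROPERTY);
        if (!outbound) { // only validate incoming requests
            try {
                Source payload = context.getMessage().getPayload();
                // copy the payload into a DOM, then validate it against the schema
                DOMResult dom = new DOMResult();
                TransformerFactory.newInstance().newTransformer().transform(payload, dom);
                schema.newValidator().validate(new DOMSource(dom.getNode()));
            } catch (Exception e) {
                throw new ProtocolException("payload failed XSD validation", e);
            }
        }
        return true; // continue with the handler chain
    }

    @Override
    public boolean handleFault(LogicalMessageContext context) {
        return true;
    }

    @Override
    public void close(MessageContext context) {
    }
}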

Read more