31 December 2006

Vegas Vacation


We went to Las Vegas for Christmas. The views from the plane as you head into Nevada and near Vegas are spectacular: rugged, arid terrain shaped by wind, water and time. Of course, Vegas offers a completely man-made perspective. Take this set of circular buildings or a view of Las Vegas Boulevard.




We stayed at the Palms. If you've never been to a Vegas casino (like I hadn't), the following might stick out as they did for me:


  • gambling machines everywhere

  • indoor air is filled with cigarette smoke

  • bright lights, flashing lights, and more lights


Indoor and outdoor exhibits were remarkable. They all seemed to come together to put your mind and body into some sort of surreal snow globe, with animation, time, and reality all suspended.




The aquarium behind the check-in counter at the Mirage, and the volcanic display that rumbles, roars and spews actual fire across water. And of course, a mermaid (one of a set of two) that greets you as you enter the hotel.




The Paris casino and the fountains at the Bellagio opposite.




Inside the Bellagio, there's more decadence, as evidenced by these Christmas decorations and the bear in the background, made of fresh carnations. And, at Caesars Palace, a super-sized statue of a gent with combed pubes and (hopefully, for his sake) a cold package.




The light show downtown, tongue-in-cheek proclaimed the widest wide-screen around. And Toni Braxton's life story being advertised, in pretty grandiose proportions.





If you're in Vegas, you must see the Hoover Dam. Fittingly, it is close to Vegas, since it too defies all sense of proportion. I couldn't even fit the whole structure in a single frame.




The two sides of the dam.




Note the intake towers: each set sits in a different state, Arizona or Nevada, and therefore has a clock showing that state's time zone.




The generator room on the Nevada side. There are two of these- one for each intake tower. The generators are the original ones installed around 1936- pretty amazing.




The first actual survey marker that I've seen!




Vertical pylons won't cut it for hauling power from the generators up the cliff walls. So, they've been angled as needed so that the wires don't scrape and short against the walls. And if you're wondering why rock walls would conduct, think about 230,000 volts and the issues you encounter at that scale.




This monument at the dam serves as a memorial to those who died during the construction.




These two signs seem to epitomize the practicality and simplicity with which the dam was built and is maintained. Six companies were needed to build it, so they were incorporated into one, creatively dubbed Six Companies Inc. No one was buried in the concrete at the dam. Simple math: the concrete could only be poured in six-inch lifts at a time. Hard to get smothered or drowned in six-inch increments. More math, and nice: the dam has cost, and continues to cost, taxpayers nothing. Sales of the power generated pay for all operations, and have paid off the construction of the dam. Pretty awesome when a project on this scale is completely self-sufficient.




The dam is home to what must be the coolest water fountain around. And, the original furniture at the dam was pretty awesome.




We went to see two shows: Tony 'n Tina's Wedding and Danny Gans, both of which were spectacular. I couldn't take pictures at the Danny Gans show, but I did at Tony 'n Tina's, where you kind of become part of the show, or wedding rather.



My Jersey heritage (weak though it may be) kicked in and I just had to have a picture with anyone wearing a hood ornament.



Joey Ciccone plays Nunzio, the father of the groom. He never got out of character, even after the show when we got him a drink. Fun.



At some point in the show, Nunzio's wife jumped on the bar next to us and went for a stripper routine. And, I'll end with that.

19 December 2006

Combed Niblicks, Fractals, and Immigration

combed niblick

Tucked away in a hall that leads to a men's room in Von Maur, there's a nifty glass-fronted case mounted on the wall that contains some old golf artifacts. The club head on top caught my eye. It is aptly described as a "Combed Niblick." The golf ball right on top of it is also pretty curious: it has feathers flaring out of it.





Vishal flew into town last week, armed with a presentation on Jini and JavaSpaces. He presented to the OJUG. I think we had the largest turnout since I started attending the sessions. Vishal's presenting style was novel: not quite Presentation Zen, but engaging and fun nevertheless.





A trip to Whole Foods on a Saturday night had me staring at the vegetables organized so perfectly in the refrigerated section.





Driving down Dodge St, by the Mexican Consulate, there was quite a ruckus. People were walking around with banners, yelling about immigration, wall building, and the usual rhetoric that accompanies the subject. In the midst of all of this pandemonium, isolated from everyone else, was this woman. Her sign read, "Safe, Sane, LEGAL Immigration. It's the LAW." I rolled down my window, smiled at her and took her picture as she was waving at me.





No more baby teeth for Leela.

16 November 2006

GPL and Java

Java is now governed by the GPL. This happened (fairly quietly) around Monday, Nov 13. As with most things in the open source world, nothing monumental happened with this announcement; no big PR blitz, no CEO chest-thumping, just a polite blog post, and a flurry of articles.

A month or so back, Oracle decided that they'd had enough of their partnership with Red Hat and/or SUSE, and opted to create their own Linux distro. An Oracle consultant at work claimed that this would eliminate the "finger pointing" between the folks at Oracle and the operating system that their database runs on. Interesting.

Java's gone in the polar opposite direction. A GPL license means that it can be married with just about anything else under the GPL. Linux, anyone?

After mentioning this to Jeff and Vishal over the last day or two, I saw: Linux Mint.

I hope that's the tip of the iceberg. I can't wait to do this:

sudo apt-get install glassfish


So, just as you can download complex Perl apps/web-apps (which gently place themselves in cgi-bin, à la MapServer), maybe something like CruiseControl becomes a one-line install/deploy; maybe we get a distro geared toward being a Java-container-running OS?

Apart from the license easing installation and packaging, which some apps were already doing (kudos to henplus), there's the sex appeal of porting some of the work done on Solaris (another recently open-sourced project) to Linux, to help a *nix system play with Java under the sheets a little more. I'd love to see a process listing via ps (or top) show me Java threads by individual thread name, instead of just a child process/thread with the command-line invocation as the process name. [Maybe this is just a pipe dream, but I'd sure like it.]

I wonder what this does for the JCP and the JSR system though. I really do like the way in which there are clear specs laid out for implementation guidelines. I hope these manage to stick around, and perhaps they might remain successful enough to become a good working model for open source projects. In a way though, I suspect that many projects already adopt a similar system, albeit in a far less formal fashion.

It feels good (even better than before!) to work with Java.

07 November 2006

Leela's Nemesis

Leela managed to vanquish the old vacuum cleaner. Now, a new contender, born of superhuman strength, agility, and scientific endeavor. Not to mention captain-insano suction power. Behold the Dyson.



Leela has a new nemesis.

30 October 2006

Memory notifications in Java

If you're like me, then you probably relied on the venerable Runtime.freeMemory() call to figure out how your application or container was doing. Especially if (ahem) you suspected a memory leak (or two) at some point.

Well, JDK 1.5 has some fancy new stuff that's really nifty.

First off, you get a good picture of your memory usage (both heap and non-heap) like so:

MemoryMXBean memBean = ManagementFactory.getMemoryMXBean();
MemoryUsage heap = memBean.getHeapMemoryUsage();
MemoryUsage nonHeap = memBean.getNonHeapMemoryUsage();
System.out.println(heap);
System.out.println(nonHeap);

This gets you some fairly cool (and very bash-script analyzable) data:

init = 33161792(32384K) used = 301960(294K)
committed = 33226752(32448K) max = 512950272(500928K)
init = 19136512(18688K) used = 1913488(1868K)
committed = 19136512(18688K) max = 117440512(114688K)


Now, on to more interactive things:

public class Mem implements NotificationListener {

    public void handleNotification(Notification n, Object hb) {
        //-- we'll get to this in a bit
    }

    public static void main(String[] args) {
        Mem mem = new Mem();
        MemoryMXBean memBean = ManagementFactory.getMemoryMXBean();
        NotificationEmitter ne = (NotificationEmitter) memBean;
        ne.addNotificationListener(mem, null, null);

        //-- more to come: configure thresholds
    }
}


The above class implements the javax.management.NotificationListener interface, which defines the handleNotification method. The main method fetches the MemoryMXBean, which turns out to also be a NotificationEmitter (javadocs come in real handy). When registering, you can actually supply both a filter and a handback object; the filter screens which notifications you receive, and the handback object is passed through to the handleNotification call. We'll be amateurs and opt to filter nothing and forward no objects of our own.
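For completeness, here's a minimal sketch of what a filtered registration might look like on a modern JDK. The class name, the "heap-watch" handback label, and the lambda style are my own; the original targets Java 5, where you'd use anonymous inner classes instead.

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryNotificationInfo;
import javax.management.Notification;
import javax.management.NotificationEmitter;
import javax.management.NotificationFilter;
import javax.management.NotificationListener;

public class FilterDemo {
    public static void main(String[] args) {
        // Only let threshold-exceeded events through to the listener
        NotificationFilter filter = n ->
            MemoryNotificationInfo.MEMORY_THRESHOLD_EXCEEDED.equals(n.getType());

        // The handback object is passed to handleNotification untouched
        NotificationListener listener = (n, handback) ->
            System.out.println("threshold event, handback=" + handback);

        NotificationEmitter ne =
            (NotificationEmitter) ManagementFactory.getMemoryMXBean();
        ne.addNotificationListener(listener, filter, "heap-watch");

        // Exercise the filter logic directly with a synthetic notification
        Notification fake = new Notification(
            MemoryNotificationInfo.MEMORY_THRESHOLD_EXCEEDED, "test", 1L);
        System.out.println(filter.isNotificationEnabled(fake));
    }
}
```

With a filter in place, handleNotification no longer needs to check the notification type itself.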

With the notification mechanism in place, we need to set up a threshold to be notified on. Expanding the main method:


public static void main(String[] args) {
    Mem mem = new Mem();
    MemoryMXBean memBean = ManagementFactory.getMemoryMXBean();
    NotificationEmitter ne = (NotificationEmitter) memBean;
    ne.addNotificationListener(mem, null, null);

    List<MemoryPoolMXBean> memPools = ManagementFactory.getMemoryPoolMXBeans();
    for (MemoryPoolMXBean mp : memPools) {
        if (mp.isUsageThresholdSupported()) {
            // This pool lets us set a usage threshold; add a notifier
            MemoryUsage mu = mp.getUsage();
            long max = mu.getMax();
            long alert = (max * 50) / 100;
            System.out.println("Setting a warning on pool: " + mp.getName() + " for: " + alert);
            mp.setUsageThreshold(alert);
        }
    }
}


We can call for all the MemoryPoolMXBeans and then check to see if threshold setting is supported. For the pools we can configure, we set an alert for when usage surpasses 50%. At this point, you are in some pretty rarefied territory within the VM:

Setting a warning on pool: Code Cache for: 25165824
Setting a warning on pool: PS Old Gen for: 236748800
Setting a warning on pool: PS Perm Gen for: 33554432


So, can we avert the dreaded out of memory condition?

Here's our uber-malicious memory leak:

import java.util.*;

public class Leak extends Thread {

    // volatile, so the flip to false is visible across threads
    public static volatile boolean keepLeaking = true;

    public void run() {
        Map<String, String> map = new HashMap<String, String>();
        int x = 0;
        while (keepLeaking) {
            String key = String.valueOf(x * 10000);
            String value = String.valueOf(x * 243523);
            map.put(key, value);
            try { Thread.sleep(1); } catch (Exception e) {}
            x++;
        }
    }
}


A call to set keepLeaking to false can avert a sure VM memory issue. So, here's the fleshed out handleNotification method:

public void handleNotification(Notification n, Object hb) {
    String type = n.getType();
    if (type.equals(MemoryNotificationInfo.MEMORY_THRESHOLD_EXCEEDED)) {
        // retrieve the memory notification information
        CompositeData cd = (CompositeData) n.getUserData();
        MemoryNotificationInfo memInfo = MemoryNotificationInfo.from(cd);
        System.out.println(memInfo.getPoolName() + " has exceeded the threshold : " +
                memInfo.getCount() + " times");
        System.out.println(memInfo.getUsage());
        Leak.keepLeaking = false;
    } else {
        System.out.println("Unknown notification: " + n);
    }
}


And, presto:


$ java -Xmx8m Mem
Setting a warning on pool: Code Cache for: 25165824
Setting a warning on pool: PS Old Gen for: 3735552
Setting a warning on pool: PS Perm Gen for: 33554432
Leaks started
PS Old Gen has exceeded the threshold : 1 times
init = 1441792(1408K) used = 3763520(3675K) committed = 4849664(4736K) max = 7471104(7296K)
$


You'd probably want to reset the peak usage a few times and ensure that you see a consistent climb before doing anything dramatic (give the GC a chance..). But, this is a fairly large leap from the polling mechanism we would previously have had to adopt to monitor how a VM is doing. Now, we can code the reaction to a situation with set thresholds instead of managing polling threads.
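Resetting the peak usage is itself a one-liner on MemoryPoolMXBean. A small sketch (the class name is mine), assuming at least one pool supports usage thresholds:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryUsage;

public class PeakDemo {
    public static void main(String[] args) {
        for (MemoryPoolMXBean mp : ManagementFactory.getMemoryPoolMXBeans()) {
            if (!mp.isUsageThresholdSupported()) continue;
            MemoryUsage peak = mp.getPeakUsage();
            System.out.println(mp.getName() + " peak used: " + peak.getUsed());
            // Clear the high-water mark so the next reading reflects only
            // usage since this point -- useful for spotting a steady climb
            mp.resetPeakUsage();
        }
        System.out.println("done");
    }
}
```

Sampling the peak between resets gives a cheap picture of whether usage keeps ratcheting upward or settles after a GC.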

20 October 2006

List.remove

Here's a really simple one:


...
public static void main(String[] args) {
    List<String> l = new ArrayList<String>();
    String one = new String("one");
    System.out.println(l.size());
    l.add(one);
    l.add(one);
    System.out.println(l.size());
    l.remove(one);
    System.out.println(l.size());
}
...


That gets you this:


0
2
1


Well, it isn't rocket science. This isn't a Set, it is a List. And the method remove says:

Removes the first occurrence in this list of the specified element (optional operation). If this list does not contain the element, it is unchanged. More formally, removes the element with the lowest index i such that (o==null ? get(i)==null : o.equals(get(i))) (if such an element exists).


The block that does all the hard work in the ArrayList class looks like so:


for (int index = 0; index < size; index++)
    if (o.equals(elementData[index])) {
        fastRemove(index);
        return true;
    }



So, if you want your output to look like so:

0
2
0


Change your list's remove to look like this:

while (l.remove(one));


Or better yet, use a Set.
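If you do need a List but want one-call semantics, removeAll with a singleton collection strips every occurrence; a Set avoids duplicates from the start. A quick sketch of both (the class name is mine):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class RemoveAllDemo {
    public static void main(String[] args) {
        String one = "one";

        List<String> l = new ArrayList<String>();
        l.add(one);
        l.add(one);
        // removeAll strips every matching element in a single call
        l.removeAll(Collections.singleton(one));
        System.out.println(l.size());

        Set<String> s = new HashSet<String>();
        s.add(one);
        s.add(one); // duplicate add is a no-op; returns false
        System.out.println(s.size());
        s.remove(one);
        System.out.println(s.size());
    }
}
```

This prints 0, 1, 0: the List ends up empty in one call, and the Set never held the duplicate to begin with.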

19 October 2006

Cats and dog



The baby German Shepherd babysat the cats for me last night.

06 October 2006

Playing with NIO

I had heard a lot about how fast NIO was. So, I decided to find out. I picked up my BufferedReader benchmarking program and used it as a baseline. Then, I wrote the following to test how fast NIO ran:


....
File f = new File("USA.txt");
FileInputStream fin = new FileInputStream(f);
FileChannel fc = fin.getChannel();
ByteBuffer bb = ByteBuffer.allocate(2048);
int len;
long start = System.currentTimeMillis();
while ((len = fc.read(bb)) > 0) {
    bb.flip();
    // only convert the bytes actually read this pass
    String s = new String(bb.array(), 0, len);
    bb.clear(); // reset the buffer for the next read
}
long finish = System.currentTimeMillis();
fc.close();
.....


Over about 100 runs, the two stack up pretty evenly reading a file of about 3 MB.



Note that the first few runs see NIO slightly slower; then it hits a step and speeds up. Perhaps, being a lot closer to the OS, you can actually see the file's contents move through different cache layers as we hit it repeatedly. Not sure.

This wasn't what I had expected. Perhaps the overhead on string creation was dwarfing the real gains in NIO. Or perhaps NIO performs better when the files to ingest are larger. So, I ran the benchmarks again with a file of about 200 MB.



NIO saves us 1 second over BufferedReader, which executes in approximately 5 seconds. Depending on whose side you are on, BufferedReader costs 25% more, or NIO generates a savings of 20%. Regardless, NIO is faster. But, it is a little more complex too.

I don't know that the speed boost (specifically for File IO) warrants a switch to NIO. File locking sure seems nice, and it would be interesting to try and use NIO over a network to see if things change more dramatically there. Another point to keep in mind is that I used the allocate method in my NIO code, not the allocateDirect, which can optimize even further. However, in my scenario, the allocateDirect did not execute without a runtime exception.
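For what it's worth, the runtime exception with allocateDirect is likely because direct buffers have no backing array, so bb.array() throws UnsupportedOperationException; copying bytes out through get() works instead. A small sketch (the class name is mine):

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class DirectBufferDemo {
    public static void main(String[] args) {
        ByteBuffer bb = ByteBuffer.allocateDirect(16);
        bb.put("hello".getBytes(StandardCharsets.US_ASCII));
        bb.flip();

        // Direct buffers are not backed by a Java array, so calling
        // bb.array() here would throw UnsupportedOperationException
        System.out.println(bb.hasArray());

        // Copy the readable bytes out through get() instead
        byte[] copy = new byte[bb.remaining()];
        bb.get(copy);
        System.out.println(new String(copy, StandardCharsets.US_ASCII));
    }
}
```

So the allocateDirect path needs the extra copy (or a CharsetDecoder run directly against the buffer), which eats into whatever it saves.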

To sum things up, NIO ran faster. The larger the file size, the greater the impact. For small files though, NIO could quite easily be overkill, both on the performance end, and the code maintenance end.

29 September 2006

BufferedReader vs InputStream

Java has an IO system that's really flexible for reading and writing data. If you keep NIO out of the equation, then data from an input source typically originates from an InputStream. However, most tutorials and mentors would recommend that you wrap this raw stream into something more appropriate for your needs. With text files, when you want to inhale Strings (XML falls into this category), you frequently see examples of an InputStream being wrapped up sequentially into a BufferedReader. So, do these abstraction layers for IO impose a penalty on performance?

I reused my big file (USA.txt) and set up a benchmark run with a BufferedReader.

...
File f = new File("USA.txt");
FileInputStream fin = new FileInputStream(f);
BufferedReader in = new BufferedReader(new InputStreamReader(fin));
String s = "";
long start = System.currentTimeMillis();
while ((s = in.readLine()) != null) {
    int x = s.length();
}
long finish = System.currentTimeMillis();
System.out.println(finish - start);
...


The chart shows the performance of 100 runs of the code. It's pretty stable and hovers somewhere between 80 to 100 milliseconds.



Next step: get the InputStream up and running and compare. So, I went with a code block like so:

...
File f = new File("USA.txt");
InputStream in = new FileInputStream(f);
// buffSize configurable
byte[] buffer = new byte[buffSize];
int len = 0;
long start = System.currentTimeMillis();
while ((len = in.read(buffer)) > 0) {
    String s = new String(buffer, 0, len);
    int x = s.length();
}
long finish = System.currentTimeMillis();
System.out.println(finish - start);
...


One of the drawbacks to the InputStream is that you can either read a byte at a time, or supply a byte array of some size to read into. I started low, with a 16-byte array. I could have been cute here, but I opted for simplicity. Here's how the 16-byte reads from the InputStream stacked up against the BufferedReader:



Not so hot, but not so surprising either. With such a small buffer size, I was surely choking the read speeds. So, I decided to try the runs with increasing buffer sizes (32, 64, 256, 512, 1024, 2048 and 4096). And, here's what happened:



As we ratchet up the buffer size, we see performance gains. However, these gains steadily diminish. The most notable detail is that there's no point at which the InputStream read outperforms the BufferedReader. Behind the nice and friendly methods that the BufferedReader provides, there's clearly a huge amount of cool code that bridges the Reader to the InputStream. The cute parts that figure out the best way to read seem to be hidden, and working.
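If you want that buffering on the raw byte side without the Reader layer, BufferedInputStream gives the same effect: tiny reads are served from an in-memory buffer (8 KB by default) instead of hitting the OS each time. A self-contained sketch (the class and scratch-file names are mine):

```java
import java.io.BufferedInputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileWriter;
import java.io.IOException;
import java.io.InputStream;

public class BufferedDemo {
    public static void main(String[] args) throws IOException {
        // Write a throwaway scratch file so the example is self-contained
        File f = new File("scratch.txt");
        FileWriter w = new FileWriter(f);
        for (int i = 0; i < 10000; i++) w.write("line " + i + "\n");
        w.close();

        // 16-byte reads, but BufferedInputStream refills its internal
        // buffer in large chunks, so the syscall count stays low even
        // though our reads are tiny
        long total = 0;
        InputStream in = new BufferedInputStream(new FileInputStream(f));
        byte[] tiny = new byte[16];
        int len;
        while ((len = in.read(tiny)) > 0) total += len;
        in.close();

        System.out.println(total == f.length());
        f.delete();
    }
}
```

So the buffering, not the Reader abstraction itself, is what makes the small-read case tolerable.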

Spend some time reading through performance myths (from the pros), and there's one constant that shows up consistently. Keep it simple. The VMs do a lot of hard work for you, so you can write code that is clean. And, clearly this simplicity doesn't impose a performance burden here. Nifty.

Now, I wonder how much NIO helps...

28 September 2006

StringBuilder vs StringBuffer

As Java 1.5 came out, many of us were eager to get our hands on StringBuilder. As a non-thread-safe version of StringBuffer, one would imagine that it would pack some heat on the performance end for cases where you don't have multi-threaded appends.

To test the relative performance differences, I used a BufferedReader over a decently sized text file (622,732 words, gathered by repeatedly pasting the Wikipedia article on the USA). Armed with data, I wrote and measured the following loop, which merely reads in the file and appends to either a StringBuffer or a StringBuilder.


....
for (int x = 0; x < 100; x++) {
    File f = new File("USA.txt");
    BufferedReader in = new BufferedReader(new InputStreamReader(new FileInputStream(f)));
    String line = "";
    StringBuffer sb = new StringBuffer();
    // StringBuilder sb = new StringBuilder();
    long start = System.currentTimeMillis();
    while ((line = in.readLine()) != null) {
        sb.append(line);
    }
    long mid = System.currentTimeMillis();
    String s = sb.toString();
    long done = System.currentTimeMillis();
    System.out.println((mid - start) + " " + (done - mid));
    in.close();
}
...


As evident, I merely switched the constructor to a StringBuilder when I was done. The measurement dumps out the total file-read-and-append time, as well as the time to hit the toString method.



Clearly, they both perform along almost the same lines. At JavaOne last year, there was lots of talk about how well the Sun VM was handling synchronization. This looks like proof.

But here's the dicey part. When we did a mass search and replace on our application, we found a massive boost. So, were we dreaming?

I was convinced that my findings were bogus, and that the overhead imposed by the readLine call on my measurement was hiding a performance difference. I was wrong. I changed the loop to measure like so:


...
long read = 0;
while ((line = in.readLine()) != null) {
    long t1 = System.currentTimeMillis();
    sb.append(line);
    long t2 = System.currentTimeMillis();
    read += (t2 - t1);
}
...


No difference. I'm happy about this because we need not start hunting down StringBuffer and switching it to StringBuilder like crazy. But, I'm perplexed about what we saw earlier. So, I decided to try and execute the same test on a JRockit VM (1.5 spec). Here's what I got:



JRockit seems to jump in performance steps, almost as if the VM is adjusting to the code. Notice the step-like decrease in append times for both the buffer and the builder, and the toString is clearly faster than Sun's. But, there's something to be said for the sheer predictability of the Sun VM too.

So, my conclusion? Don't race to switch StringBuffer to StringBuilder- there doesn't seem to be a real tangible difference in performance.
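To take the file IO out of the picture entirely, here's a pure in-memory append timing sketch (the class name and iteration count are mine; System.nanoTime suits short intervals better than currentTimeMillis, and the absolute numbers will vary by VM):

```java
public class AppendBench {
    // Time n appends against any Appendable (both StringBuilder
    // and StringBuffer implement it)
    static long time(Appendable a, int n) throws Exception {
        long start = System.nanoTime();
        for (int i = 0; i < n; i++) a.append("some line of text");
        return System.nanoTime() - start;
    }

    public static void main(String[] args) throws Exception {
        int n = 200_000;

        // Warm-up pass so the JIT compiles the loop before we measure
        time(new StringBuilder(), n);
        time(new StringBuffer(), n);

        long builder = time(new StringBuilder(), n);
        long buffer = time(new StringBuffer(), n);
        System.out.println("StringBuilder: " + builder / 1_000_000 + " ms");
        System.out.println("StringBuffer:  " + buffer / 1_000_000 + " ms");
    }
}
```

Even with IO removed, the gap between the two tends to be small on a VM with uncontended-lock optimizations, which matches the chart above.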

26 September 2006

entry zero

In looking up the term recursion on Wikipedia, I found a nifty item: the Droste Effect. M. C. Escher appears to have made use of the technique.