Wednesday, October 29, 2008

EclipseWorld

This week I have been speaking at EclipseWorld in Reston, VA. I have been talking mostly about Ruby on Rails, plus a short session on iPhone (web) development. Most of the developers here are Java developers, so they look at Rails as a way to make their jobs easier. It's a grim economy, and they are being given the classic task of doing more with less. They look at Ruby on Rails as a way to do just that. More features. Less code. Happier developers!

EclipseWorld has been a great conference to speak at. What has been very cool for me is interacting with developers outside of Silicon Valley. Now don't get me wrong, if you are a developer, especially a web developer, then Silicon Valley is the place you want to be. I would compare it to working on Wall Street if you are in finance, or working in Hollywood if you are in show business. It's not for everybody, but it presents the chance to prove that you've got what it takes, and, if you are lucky, a chance to make a lot of money.

However, in the Valley it is easy to forget that most web development is not about creating The Next Big Thing. In the Valley, @tychay will rip you up for using Rails because it fails at web scale. On the east coast, I have met a lot of developers creating internal web applications, or maybe customer service applications. These are applications that can run just fine on a single multi-core box, even with the database running on the same machine. They aren't stressing out over page weight, database partitioning, or terabytes of cache. They are creating sophisticated applications, and they always have way more feature requests than they have time.

These are the people who are most empowered by tools, especially Eclipse. They don't have some huge team with specialists for doing the CSS or tuning the database. They do it all themselves, and Eclipse makes that a whole lot easier. I've written a lot about things you can do with Eclipse, but this experience has really put things into better perspective.

Wednesday, October 22, 2008

AjaxWorld 2008

AjaxWorld was this week, and it was interesting. I think the down economy is having an effect on everyone, but there were still a lot of interesting things to learn about. On Monday, I did a talk on a favorite topic of mine, networked applications. The talk was a lot of fun; hopefully the audience would agree with that assessment. Overall though, I would say there were a couple of major themes at AjaxWorld this year.

1.) Comet. There were a lot of talks about some form of data push from the server to the browser. Kevin Nilson did a nice job of differentiating Ajax (an infinite loop of XHR polls) vs. Comet (a long poll; there's a minimal sketch of this at the end of this post.) The folks at ICEFaces have built some nice abstractions on top of Comet. There was also a lot of interest around WebSockets, especially the work by the folks at Kaazing. A full-duplex socket connection to the server sounds great on paper. I think there will be some very interesting technologies that grow around that.

2.) Don't make me learn JavaScript! There seemed to be a lot of folks advocating the "only know one language" approach to the web. In most cases that language was NOT JavaScript (even though the Jaxer guys say it can be.) Vendors like Oracle and ICEFaces preached abstractions that shield the developer from JavaScript. The GWT folks say do everything in Java. Microsoft says use Silverlight so you can do everything in C#. Of course one of the best sessions was by Doug Crockford, who told everyone to man up and learn JavaScript. I tend to agree with Crockford, even though I prefer ActionScript...
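Speaking of preferring ActionScript, here is roughly what the Comet half of that Ajax/Comet distinction from item 1 looks like in code. This is just a minimal sketch, where the /events URL and the handleMessage function are made-up stand-ins for whatever your server and app actually provide:

import flash.events.Event;
import flash.net.URLLoader;
import flash.net.URLRequest;

// Hypothetical endpoint: the server holds the request open until it
// has data to push, then responds (the Comet "long poll").
private static const EVENTS_URL:String = "/events";

private function longPoll():void {
    var loader:URLLoader = new URLLoader();
    loader.addEventListener(Event.COMPLETE, function(e:Event):void {
        handleMessage(loader.data); // made-up app-specific handler
        longPoll();                 // immediately reconnect and wait
    });
    loader.load(new URLRequest(EVENTS_URL));
}

The trick is that the server does not respond until it has something to say, so unlike the XHR polling loop, this "loop" only spins when there is actual data.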

Wednesday, October 15, 2008

Recession and The Valley

Are you a programmer who recently moved to Silicon Valley? Are you nervous about what things will be like now that the greater economy has gone pear-shaped? Then listen to this old man tell you all about his experiences in the last recession in the Valley.

I moved to the Bay Area in 2000, right as the last great recession was getting started. Like this recession, it happened in an election year and it really hurt the incumbent party. I started off working at a very small start-up in San Francisco called InternetElements. They had a cool idea: allow any bank or even a large organization to provide its members with the tools to buy and sell stocks. Remember that back then the stock market was really hot, much hotter than it has been in the last few years. New companies sprang up all the time and went IPO. Everybody bought into the IPO and made crazy money. It was great.

Anyways, once the stock market started tanking and IPOs disappeared, nobody had much stomach for InternetElements' idea. I was employee #4 there. Our CEO basically told us that we had cash in the bank to make payroll up until a certain date. This was about a month before that date. There were talks going on to get us either some more money or do a merger with another company that had more money, but neither of those worked out.

I started looking for a new job. This was fall of 2000, so things weren't too bad just yet. I didn't have much experience, but I had a degree from a (among the tech world) well-known school. So that opened doors for me. I found a new job two weeks before the old one was set to die, and started at the new one on the Monday after InternetElements stopped promising to make payroll. The two founders of the company continued on trying to salvage their company, but there were no hard feelings at all.

My next company was called RMX. I would work there for the next two years. It was also a start-up, but was sort of a spin-off from Chevron. I was there throughout the recession. When I first started, we were expanding pretty quickly. They had hired a lot of consultants to build the initial site and needed to replace them with full-time employees. There was also a grand vision of a big company with a large and intricate org chart.

That vision died pretty quickly. Soon the hiring stopped, and then some layoffs started. Tech companies everywhere were struggling. We knew there would be no additional rounds of funding, so we had to become profitable to sustain ourselves. Our management was great. They explained everything in detail at all times. We met as a company every Friday morning. Management talked about how much money we had in the bank, what our burn rate was, and what kind of sales prospects we had.

We worked really hard at RMX. We closed deals with new customers. We drastically cut costs by replacing licensed software with either open-source or in-house built software. By the spring of 2002 we were profitable with about $4M in the bank still. Everybody felt pretty good about themselves. We had survived the recession, even when it got amplified by the aftermath of 9/11. Or so we thought...

In May of 2002, our board of directors voted to shut us down. The reasoning? Even though we were profitable, the board thought the recession had changed the landscape. Our ceiling was much lower, even though our risk was also now very low. We weren't a worthwhile investment, so they decided to liquidate us.

Half of the company was laid off within a week of that. The other half was kept around. We had customers and contracts with those customers that required us to help them transition to not using our service. Basically each customer got a copy of our code so they could run our service for themselves, on their own hardware. So all of the engineering folks were needed for the transition, but obviously sales and marketing folks were not.

Everybody laid off got one month's pay as severance. Everybody who was not laid off and stayed until the end got one month's pay as severance, plus a bonus for staying to the end. I was one of those folks. It was a pretty good deal in some ways. I had a job, while it was understood that I would look for a new job. However it was pretty depressing. Half the folks in the company were gone, including a lot of friends. Everybody still working knew their own end was in sight. The closer it got, the more stress people felt.

I wound up staying until the very end. We had a big party on the last day. It was nice, but it was pretty upsetting for me. After two years with a start-up, I had a lot of emotional investment. I was too shaken up by it to even say proper good-byes to everyone.

The next week I went on unemployment! I had COBRA papers ready to file when my health insurance ran out. And I did a lot of job hunting. I felt a lot of desperation to find a new job and took the first offer that came my way. That was a mistake in hindsight. I wound up only going one week without a job.

The new job was a contract position writing C#. I was intrigued about learning a new language, as I had only done Java, Perl, and a little C++ previously. I was a total gun-for-hire at this job, and I was not used to that. I had been an integral part of a start-up for the three years prior to that, and I did not adjust well. Luckily, after four months, I found a job at Yet Another Startup: KeepMedia, now MyWire.

The worst of the recession was over in 2003, but things were not peachy. I started working at KeepMedia in February and we launched that summer. It was a great company, and I was back in the kind of role I liked. Things did not take off like we wanted. I think that had more to do with the business plan than the economy, but who knows. We never had any layoffs or anything like that there. But we did everything on the cheap, and I do mean cheap. Our biggest expense was an Oracle database. We were scared to put people's credit card numbers in a MySQL database.

Anyways, that was an interesting experience too. It was a start-up that started in a recession. What was a little different about KeepMedia is that we were funded by a single person, Louis Borders. We did not have a certain amount of money in the bank, and there were no plans to seek VC funding. That would have been tough to get anyways at that time. But there was still a huge emphasis on saving money at all costs. We were very creative at doing that. I learned a lot of valuable lessons by having such constraints placed on the systems I built.

I wasn't at KeepMedia very long. That's a long story in itself, and I hated to leave. However, by the time I left, the recession was officially over in The Valley. Let me summarize some lessons that I learned back then:

1.) Start-ups are still start-ups. They are no better or worse just because there is a recession going on.
2.) However, if you are at a start-up, it becomes even more important to know what the heck is going on.
3.) You should still be picky about your job. Don't let the recession force you into a job you hate. Now the recession can force you into that situation, i.e. you are running out of money, etc. But don't put yourself into that situation artificially.
4.) If you do find yourself in a bad position, don't be afraid to make a change.

That covers the professional side of things for me. When it comes to personal things, honestly the last recession did not affect me negatively. Rent prices dropped a lot during that recession, mostly because they were way too high before it. I never took a pay cut and I never had any money in the stock market other than my 401K. So in many ways my buying power actually increased during the recession. Now if I had been unemployed for a long stretch... well obviously that would have been a lot different.

Will this recession be even worse? I actually don't think it will be worse for The Valley, just because the last one was so bad. This one looks like it will be worse for the country at large, and maybe it will last longer. The last recession was four years solid in The Valley, though maybe less elsewhere.

Monday, October 13, 2008

ActionScript Vector Performance

Flash Player 10 is coming out this month. This weekend I went to Adobe's FlashCamp. It was a lot of fun by the way, and a big thank you must go to Dom and the folks at Adobe for a great event. Adobe really treats developers well. Anyways, there are a lot of great new features in Flash Player 10. Other folks will talk about 3-d and text engines and Pixel Bender, etc. but me? I'm excited about Vectors.

If you are like me, then the first time you heard about the new Vector class in ActionScript, you thought it might have something to do with graphics and whatnot. However, I was overjoyed to learn that it is like the Vector from C++ and Java, i.e. a list-style collection. Like the Vector from the STL and in Java 1.5+, it is parameterized. Even better, you can specify a fixed length for it. In other words, you can say that it is a collection that only allows one type of object and that there is a fixed number of objects in it. This makes excellent performance optimizations possible, and indeed Adobe has taken advantage of them. Of course I had to test this out for myself.
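To make that concrete, here is a quick sketch of declaring one (my own example, not from Adobe's docs):

// 10 is the initial length; true makes that length fixed, so the
// player never has to check whether the Vector needs to grow
var prices:Vector.<Number> = new Vector.<Number>(10, true);
prices[0] = 3.14;      // fine
// prices.push(2.71);  // would throw a RangeError: the length is fixed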

One of the other nice things about FlashCamp was that they gave us a new build of Flex Builder. This one supports the new language features in Flash 10. To enable the new features, you just change the Flash Player version that you target and voila! I took some benchmark code that I had used for Flash/JS comparisons. Here is the original code:

private function testArray():Number{
    var startTime:Date = new Date();
    var arrStr:String = null;
    var arr:Array = new Array();
    var i:int = 0;
    for (i=0; i<=94; i++){
        arr.push(i);
    }
    for (i=0; i<=arr.length; i++){
        arr.push(arr.pop());
        arr.sort().reverse();
        arr.push(arr.splice(0,1));
    }
    arrStr = arr.join();
    return (new Date()).getTime() - startTime.getTime();
}

And here is the new version that uses a Vector instead of an Array.

private function testVector():Number{
    var startTime:Date = new Date();
    var v:Vector.<int> = new Vector.<int>();
    var arrStr:String = null;
    var i:int = 0;
    for (i=0; i<=94; i++){
        v.push(i);
    }
    for (i=0; i<=v.length; i++){
        v.push(v.pop());
        v.sort(comp).reverse();
        v.push(v.splice(0,1));
    }
    arrStr = v.join();
    return (new Date()).getTime() - startTime.getTime();
}

// Unlike Array.sort(), Vector.sort() needs an explicit compare
// function. The comp used above was not shown in the original post;
// a simple numeric comparator like this one works:
private function comp(a:int, b:int):Number{
    return a - b;
}

Do you like the Vector syntax?

Anyways, back to the results. The Array code averaged around 90 ms on my MacBook; the Vector code averaged around 20 ms. 4.5x? Very nice.

One thing I immediately wondered about Vector was whether the Flash compiler erased the type. At first glance, there is no reason to do this. It is a different type, so there is no backwards compatibility issue. It does not extend Array, but it is a dynamic class. The documentation states that the push() method does not do type-checking at compile time, but does do it at runtime. This seemed weird, but it would imply that type information is not erased, since it can be checked against at runtime. However, in my testing I could use push() to push any object into any Vector, and got no compile-time or runtime errors.
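Here is the kind of test I mean, as a minimal sketch:

var ints:Vector.<int> = new Vector.<int>();
ints.push(42);           // obviously fine
ints.push("forty-two");  // per the docs this should fail at runtime,
                         // but in my testing no error was thrown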

Keynes vs. Hayek, The Final Round

This is it. Much of the last century has been a showcase of two divergent schools of thought in economics: Keynes and Hayek. Keynes ruled up until the late 70's. The Hayek school found believers in Thatcher and Reagan, but both were compromised: at best, attempts at "supply side" economics close to Hayek's Austrian school were combined with more Keynesian monetary policy.

Now we have the kind of financial implosion that Austrians have all said was an inevitable consequence of Keynesian monetary policy conducted by central banks. Governments have responded with extreme measures -- extreme Keynesian measures. Austrians aren't willing to say that this won't work, but do say it is only delaying an even worse fate.

The Austrians are smart folks, but they don't like to be measured and tested. They denounce any kind of objective, scientific measurement of their ideas. But they cannot avoid this one. This is it. If you are a follower of Hayek, then you must agree that we will see economic hardship on a grand scale within the next ten years or so. How grand? Again, the Austrians will never give you numbers, but you gotta figure we're talking Great Depression kind of scale. That would be 25% unemployment, western governments collapsing, democracy giving way to totalitarianism. If we don't have something like that in the next decade, just a run-of-the-mill recession, then the Keynesians (and most of civilization) win.

Wednesday, October 08, 2008

MLB Final Four

What can you say about the Cubs ... but wow. Statistically they were favorites against the Dodgers. Most people would have said that they were "heavy" favorites, since they had the best record in the NL. As an Atlanta Braves fan, I can tell you how little that matters. Statistically there was a 48% chance of the Dodgers winning, vs. 52% for the Cubs. That being said, there was only about a 10% chance of a sweep...

Of course those numbers are based on season statistics, and many would point out that the Dodgers were a much better team with Manny Ramirez on the team. Is this true? Their record was 29-24 with Manny vs. 55-54 without him. They outscored their opponents 249-214 with Manny, which would translate to a ridiculous 40-13 expected record. Even with 53 games, you see the craziness of small sample sizes... The Dodgers actually gave up slightly more runs per game with Manny than without him, 4.04 vs. 3.98. So the improvement really was in the offense. They scored 4.7 runs per game with Manny, vs. 4.14 without him.

So, Viva la Manny? The small sample size skews things, but they sure look like good picks for the NLCS. The Phillies were a better team in the regular season, but nobody is as good as the Manny Dodgers. You're not going to find me picking the Dodgers. The Braves were in the NL West for a long time, so I learned to hate the Dodgers many years ago. Of course that's only gotten worse since I moved to the Bay Area nine years ago.

So what about the ALCS? Boston is statistically a better team than Tampa Bay. What is unusual is that both teams had strong home/road splits: both were much better at home than on the road. Tampa Bay won the AL East, so they have home field advantage. Could the home team win every game in this series? Even with these teams it is statistically unlikely, but the home-team bias suggests that this series will be very close.

By the way, it should be no surprise that the ALCS is between two AL East teams. Six of the top AL hitters in terms of runs created were from the AL East. Ten of the top twenty hitters in terms of runs created per 27 outs were also from the AL East. Eight of the top fifteen AL pitchers in terms of ERA were also from the AL East. And it's not just Boston, Tampa Bay, and New York. Baltimore and Toronto also had very good hitters (Nick Markakis, Aubrey Huff, Alex Rios) and pitchers (Roy Halladay, Jeremy Guthrie).

Sunday, October 05, 2008

October Talks

October is going to be a busy month for me. Next weekend I will be at Adobe's FlashCamp. I will be there Friday night and Saturday, and I may do a short session on TwitterScript, the ActionScript API that I maintain. In particular I want to talk about some of the authentication wrinkles present in TwitterScript and its forked brothers.

On October 20, I am speaking at AjaxWorld. I am going to be talking about a subject near and dear to me, Networked Applications. I'll be talking about why you shouldn't waste the power of your servers building HTML strings, and why you should instead start using things like jQuery, GWT, or Flex to cash in on the power of your users' computers.

The week after that, I will be on the east coast speaking at EclipseWorld. On Day One, I am doing a day long, introductory workshop on Ruby on Rails. Of course I'll also talk about how Eclipse can help you out. On Day Two, I am doing two talks. One ties in to the previous day's workshop and is about RadRails. The other session is on ... iPhone development. Kind of a strange topic for me. Chris Williams from Aptana was supposed to do both sessions, but couldn't make it. So Aptana asked me to fill in for him. Hopefully they won't wind up regretting that decision!

Friday, October 03, 2008

Feed of Shame

What do you do when you are on Facebook and notice this in your feed?
Palin? Really? I can understand people supporting McCain. If you are pro-war, then you should be pro-McCain. If you are in a very high tax bracket, then it is in your best interest to vote for McCain. There are other rational reasons as well, and of course there is the old standby "he's not as bad as the alternative."

But what would make you support Palin? Are there women who she appeals to, despite her extreme anti-abortion stance? Maybe you like her views; that is reasonable. But then to have her as a spokesperson for your views is ... embarrassing to say the least:


Tuesday, September 30, 2008

WSDL in ActionScript

One of the advertised features of Adobe's Flex Builder is that it works with web services. Indeed, in any project you can import a WSDL and Flex Builder will generate lots of code for you. The resulting generated code states that the code generator is based on Apache Axis2, and it looks like it. This is mostly a good thing.

This is ok for a single developer or even a small team. Once you get to larger scale development, you usually want to keep generated code separate from the source artifacts that generated it. Often you never want to check in generated code. Why? Because then you have two sources of truth: the artifact (the WSDL in this case) and the generated code. You don't want to keep these in sync manually; you want your build process to do it. So you don't check in the generated code, and your build system generates it instead.

So ideally the code generation in Flex Builder could be invoked outside of Flex Builder. This may be the case, but so far I have had no luck with it. It is certainly not a documented part of the SDK.

I looked for an alternative and found wsdl2as. This looked promising, but did not work out. First, it expects you to send in XML literals when sending SOAP messages. Sure, it generates the boilerplate around the core message, but if I wanted to work directly in XML, I would not have bothered with a code generator. It has an option that seems designed to deal with this, but it did not work. Even worse, it does not handle anything except the simplest WSDLs. The first WSDL I tried with it defined complex types for the input parameter and return type of the web service. This caused wsdl2as to choke, as it expected all type information to be inlined. Sigh.

Monday, September 29, 2008

At Season's End

The regular season of Major League Baseball is at an end. That is always a bummer to me. One of the reasons that I like baseball so much is that it is played every day. Every day something interesting happens. Of course the playoffs are here, but there is not much joy in those for me this year. No Braves. No A's. No Giants. At least there are no Yankees or Mets, though...

It is always fun to look back at the season, and of course, to speculate on the future. Who should win the awards? And, who should win in the postseason? Being a numbers man, the awards are the most fun to examine.

AL MVP
This is a close race because there are no outstanding candidates. In fact, the top AL hitters were significantly weaker than NL hitters this year. If Lance Berkman or Chipper Jones were in the AL, you could make a very strong case for them as MVP... Let's look at a couple of relevant stats. First, runs created:

1.) Grady Sizemore, 128
2.) Josh Hamilton, 122.8
3.) Dustin Pedroia, 120.2
4.) Nick Markakis, 118.4
5.) Aubrey Huff, 116.5

That is a nice advantage for Grady Sizemore. One reason for the advantage over the other players is that he played a lot and led off, leading to a lot of plate appearances. Still, he had a very good season. Who would guess that a lead-off hitter would have 33 home runs and 98 walks? Perhaps he should not be hitting lead-off... A more weighted number is runs created per 27 outs. Here is that top five.

1.) Milton Bradley, 8.97
2.) Alex Rodriguez, 7.89
3.) Kevin Youkilis, 7.8
4.) Carlos Quentin, 7.67
5.) Nick Markakis, 7.42

Only one hold-over from the previous top five, and that is the very underrated Markakis. Perhaps he is the MVP? Perhaps. The other leaders in total runs created are all in the top eleven in runs created per 27 outs. For a final measure, let's look at the top 5 in VORP.

1.) Alex Rodriguez, 65.6
2.) Grady Sizemore, 62.7
3.) Dustin Pedroia, 62.3
4.) Aubrey Huff, 58.4
5.) Josh Hamilton, 57.1

Another very different top five! Even missing some games, A-Rod provided the most "value" for his team. Don't tell Yankee fans this, as I am sure they are working on a way to blame their postseason absence on A-Rod. I can just imagine "Ah, Moose got us 20 wins, if only A-Rod could have hit some!"

From a pure statistical consideration, Milton Bradley was the most "potent" hitter, but only played 126 games. Throw him out, and it sure looks like you would have to go with A-Rod as MVP, once again. If I had a vote, that is who I would go with.

That is not going to happen, and everybody knows it. People like to vote for players who are on "winners". You have to be clearly the best (and even that is often not good enough) to get an MVP trophy while being on a team that is not playing in October. So the people they list are folks like Boston's Pedroia and Youkilis, as well as Justin Morneau and Joe Mauer from the Twins. If Carlos Quentin had not broken his hand during a temper tantrum, he would surely be a front runner. The other name I've heard is Francisco Rodriguez, from the Angels.

Given that, it would seem that Pedroia has the advantage over the other "candidates."

NL MVP
This one is a little easier. Albert Pujols led the league in all of the stats mentioned previously. He was clearly the best hitter in the league, and nobody is really arguing this one. Ryan Howard's .251 average pretty much guaranteed that he is not in the mix. He is the only guy whose "traditional" stats (HRs/RBIs) beat Pujols's, and he plays for a division winner. He also finished very strong, just as his team did, coming from behind to pass the Mets in the last month. But there's no chance of this argument working! Let us hope not, at least...

AL Cy Young
This is viewed as a two-horse race between Cliff Lee and Roy Halladay. That is good, because that is how it should be. They were far and away the two best pitchers in the AL. Nobody was even remotely close. Most people think that Lee will win because, well, because he is a winner. His 22 wins jump out. He also led the league in ERA. It is rare for a pitcher to lead in both of those stats and not win the Cy Young. For what it's worth, he led the league in VORP as well, edging out Halladay. You can make nice arguments about how Lee pitched against weaker competition, but it's hard to imagine too many people buying that. Cliff Lee should win and will win.

NL Cy Young
Now this is more interesting. Once again a lot of people think it should be a two-horse race. Once again they are right, but they've got the wrong horses. Most people think it is between Brandon Webb and Tim Lincecum. These may indeed be the two "finalists" for the award, but it should not be that way. Webb was nowhere near as good as Lincecum. He just has a lot more wins, and people get carried away over wins. So Lincecum should be Cy Young, right?
I won't argue against it, especially since I root for the Giants against most teams. However, there is a guy who has been just as good, and maybe even a little better than Lincecum: Johan Santana. He edged Lincecum in ERA, and in VORP (73.4 to 72.5.) Statistically, over the course of the season, he was worth about one extra run (total) more than Lincecum. By comparison, Cliff Lee edged Halladay by about 3.5 runs in VORP.
If you start making the "they played for a winner" argument, then clearly Santana has the edge over Lincecum. You can take that one step further. The Mets were battling the Phillies for the NL East crown this weekend. On Saturday they sent Santana out on short rest and he delivered better than you could hope for, throwing a complete game shutout while striking out nine. I think "clutch" is an illusion, but most people believe in it, and I am sure they would say that Santana was as clutch as it comes. He definitely did everything he could to get his team into the playoffs.
So if people were talking about Lincecum vs. Santana, I would guess they would pick Santana. But they are not. They are only mentioning Lincecum vs. Webb. Lincecum is the clear choice there. Personally if I had a vote ... I would vote for Santana. He has been a little better. The NL East is much better (in terms of hitters) than the NL West.

Thursday, September 25, 2008

The Great Bailout

"OMG! The _____ is in trouble! What are we going to do!!!?!"

When government people say things like this, it is always a precursor to the government proposing itself as the solution to the problem. The problem is so dire, that only the government can solve it. Of course they will need more money and more power to solve the problem. Oh, and if you don't think this is all true, then you are too dumb to understand the problem or you are just un-American because you don't care about all of the Americans who could be hurt by this grave danger.

Mr. Dave Winer makes the point that the current administration has used this argument before. Only then it was Colin Powell making the case for war in Iraq. Now it is Henry Paulson doing the same thing, but with regards to the banking meltdown. Dave is right on all of this. He then goes out of his mind by suggesting that Bush/Cheney should resign, Nancy Pelosi be made President, and Paulson's plan be allowed to move right ahead. The problem is not just Bush/Cheney, and Pelosi is definitely not the solution. The problem is Paulson's request for power and money. It's like saying it would have been ok to listen to Colin Powell and attack Iraq, but only if Al Gore had been president. It didn't matter who was President; attacking Iraq was wrong in every possible way.

Of course Ron Paul has some interesting things to say about the bailout. His opinions are largely grounded in the Austrian economic theory that the government makes business cycles more extreme (bigger booms and bigger busts) by causing malinvestments, like buying subprime mortgages for example. Like all things in Austrian economics, it is a matter of "belief," as these are statements that are purposely impossible to scientifically verify. However, it is hard to dispute that the U.S. government has encouraged high-risk loans for the purpose of buying real estate, and that the very financial institutions who did this most are now the ones that are going bankrupt.

The point is that our government does not have a good track record here. Maybe it has been the main source of the problem, as Paul suggests, or maybe not, but it certainly has been part of the problem. Now it wants unprecedented (in this country at least) power and money to solve the problem that it has been at least complicit in. Given that, how can we support this idea?

Oh, but what is the alternative? I don't know, and I don't think the government knows either. Yes, there will be banks that go under. Does that mean that we'll all be out of money? No, of course not. Everyone's savings are already guaranteed by the FDIC. Not to mention that even in the case of bankruptcy, creditors (that would be the people the bank borrowed money from, i.e. depositors) have first priority. Nobody is going to lose their savings.

But surely there will be other disasters, right? If so many banks go out of business, how will we get loans for houses, cars, or new businesses? Well, perhaps not all of the banks will go out of business. Certainly there are those that have been buying up these insolvent banks. Or maybe other companies will take the opportunity to expand into the banking vacuum created by the insolvent banks. I'm not sure, but I'm not willing to let FUD from the government convince me to give the government the kind of virtually unlimited power that it is asking for.

Tuesday, September 23, 2008

No SharedObjects Allowed

Client side storage by the Flash player (SharedObjects) has several advantages over traditional client side storage, a.k.a. HTTP cookies. From a security standpoint, it is better because the data is never sent over the wire. However the main advantage to most people is that it is bigger, and when it comes to managing data on the client, size definitely matters.

By default you get 100 KB instead of the 4 KB you get with cookies. If your application tries to store 101 KB, it won't fail. Instead the user will be prompted to increase the allocated space by a factor of 10, i.e. from 100 KB to 1 MB. Of course you probably don't want the user to ever see this screen. One of the other advantages of SharedObjects is that people don't delete them. People blow away their cookies all too often, but most people would have no idea how to do the same with SharedObjects. The only way you would find out would be if you saw the Flash player settings screen, i.e. the interface that appears when a Flash application tries to go over the 100 KB default limit.

So stick to under 100 KB and all is good, right? Not so fast. The settings interface requires that your Flash app be at least 136x213 pixels. If it is smaller than that, then what happens? First let's explain what happens when it is big enough to show the settings interface. When you flush data to local storage, a string is returned with a status. Here is typical code for this.


var testSo:SharedObject = SharedObject.getLocal("test", "/", false);
testSo.data.testValue = "test";
var soStatus:String = testSo.flush();
if (soStatus != null){
    switch (soStatus){
        case SharedObjectFlushStatus.PENDING:
            testSo.addEventListener(NetStatusEvent.NET_STATUS, someHandler);
            break;
        case SharedObjectFlushStatus.FLUSHED:
            break;
    }
}

There are two possible return values, either "pending" or "flushed." There is no fail. So if you were flushing 101 KB, then you would get a pending return value. Now all you can do is wait for an event, or more precisely a NetStatusEvent. This will tell you if the user allowed you to increase the size or not. If not, then the NetStatusEvent will come back with a failure code.
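For completeness, here is a minimal sketch of what someHandler might look like. It assumes testSo from the snippet above is stored in an instance variable:

private function someHandler(event:NetStatusEvent):void {
    // "SharedObject.Flush.Success" means the user granted the space;
    // "SharedObject.Flush.Failed" means the request was denied
    if (event.info.code == "SharedObject.Flush.Failed") {
        // the data was not persisted; fall back to in-memory state
    }
    testSo.removeEventListener(NetStatusEvent.NET_STATUS, someHandler);
}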

If there is not enough space to display the settings interface, then you would think that you would just get an automatic failure, but you don't. Instead you get a "pending" from the return of flush(). It's not really pending, since the user can't actually choose to allow it to succeed. It can only fail. But the player pretends this is not the case and that the user denied your request. So you still need to listen for the NetStatusEvent. If you don't catch that event, then the Flash player will throw an error to the user, and of course you do not want that. Here is a picture of that.


Monday, September 15, 2008

Death Magnet



Last week, Metallica released Death Magnetic. Your opinion of it seems to have been determined approximately 17 years ago. That is when Metallica released their self-titled, so-called "Black Album." For some people, this was Metallica's sell-out album. They went from being a cult favorite to being mainstream. Never mind that they already had multiple gold and platinum records prior to the Black Album; no one can argue with the success of the Black Album. It has always been hip to criticize that album and everything after it, and to praise everything before it. If you are hip like that, then obviously you won't like Death Magnetic. On the other hand, if you thought the Black Album was a big improvement for Metallica, then you will love Death Magnetic.
Personally, I like the Black Album and I like Death Magnetic. It is definitely in the vein of other recently successful rockers of the 80s/90s, like U2, R.E.M., and the Red Hot Chili Peppers, in that it "channels" a lot of their classic material while still sounding modern. The guitar playing is impressive, and in many ways the whole thing felt like it had been inspired by the Guitar Hero video game (which I also love to play.) In fact Death Magnetic can be downloaded and played on the XBox 360 and Playstation 3, but unfortunately for me, not the Wii...

Monday, September 08, 2008

Scala ArrayStack

I had not done any Project Euler problems for awhile, so I decided to solve one yesterday. I was also planning on attending the next BASE meeting, so I wanted to brush up my Scala. Thus it was time to solve Problem #47 in Scala.

The solution got me a little more familiar with some of the data structures available in scala.collection.mutable. In particular I needed a structure to hold a list of factors. I decided that ArrayStack was the best choice. Here is my solution:


package probs

import scala.collection.mutable.ArrayStack

object Euler47 {
  def main(args : Array[String]) : Unit = {
    val start = System.nanoTime
    solve(4)
    val duration = System.nanoTime - start
    println("duration=" + duration/1000000.0)
  }

  // find the first n consecutive integers that each have
  // exactly n distinct prime factors
  def solve(n:Int):Unit = {
    var i = 2
    while (i > 0){ // loops until the answer is found and we return
      var j = i
      while (j < i+n && numFactors(j) == n){
        j += 1
      }
      if (j-i == n){
        val msg = (i until j).foldLeft(""){(x,y) => x + y + " "}
        println(msg)
        return
      }
      i += 1
    }
  }

  // count the distinct prime factors of n by trial division
  def numFactors(n:Int):Int = {
    var factors = new ArrayStack[Int]
    var i = 2
    var m = n
    while (i <= m/i){
      while (m % i == 0){
        if (factors.size == 0 || i != factors.peek){
          factors += i // only record each prime once
        }
        m /= i
      }
      i += 1
    }
    if (m != 1){
      factors += m // whatever remains is itself prime
    }
    factors.size
  }
}

I was very pleased with the performance, solving the problem in about 0.4 seconds on my MacBook. I saw a similar, but not as good, Java solution on the message boards that ran in 1.5 seconds. That solution added each factor multiple times and then had to loop through the factors again to get rid of duplicates. I ran it on my MacBook and it ran in 1.1 seconds. Even when I "fixed" it, it still took about one second. I am sure I could have done a lot of work to it and gotten it as fast as the Scala, but why bother.

Thursday, September 04, 2008

JavaScript Faster than Flash

This is the last benchmark for awhile. Well, at least for today. I converted the JS benchmarks to ActionScript and tested them. The results were surprising, as JavaScript in Safari 4 and Firefox 3.x edged out Flash:


A few notes. I could not convert all of the tests, as two of them (the DOM and Ajax tests) were predicated on browser-specific code. I could have done 'equivalent' functionality in ActionScript, but it did not seem appropriate for comparison. Otherwise the code was translated as is ... for the most part. I did add static type information where possible. There were also a few APIs (on Date and Array) that had to be tweaked slightly. I tested similar changes to the JavaScript. The only test where there was any effect was the Date test. The JavaScript used Date.parse, which does not exist in ActionScript. The Date constructor does the same thing. If I switched to using the Date constructor in JavaScript, it was just slightly slower.

It certainly seems that much of the performance advantage enjoyed by Flash upon the arrival of Flash Player 9 has been erased. Flash still had a strong advantage in more mathematical calculations (dates, integer and floating point arithmetic) as well as string manipulation. It did very poorly with arrays and regular expressions. I would guess that as the JITs for JavaScript get better, the string advantages will disappear. Flash will probably maintain an advantage in more mathematical computations, especially given its vector graphics features. Hopefully advances in JavaScript will spur Flash VM progress.

Notes
1.) Tested on both Flash 9 and 10 RC2 on both OSX and Windows. Negligible performance differences in any of the permutations.
2.) Also tested with Silverlight, but only on Windows. It was slower than everything except IE7. However, that was because it was terribly slow at regular expressions and error handling. It clearly had the best JIT as it was able to reduce some of the tests to 0 after a couple of executions.

Distractions

Distractions are everywhere. Some people say that Ron Paul is a distraction. Is Sarah Palin a distraction? Or maybe it was Hurricane Gustav. I say that the economy is a distraction.

The focus of the election has become the economy. The economy is important, right? For two years in college, I actually double-majored in economics. If I hadn't been so lazy during my senior year, I would have a degree in it. However, it is not the most important issue in this election year, at least not to me. That distinction still belongs to the war.

Sometimes other libertarian-leaning people question me for voting for Democrats. I always say that I would rather have my economic freedoms violated than my personal freedoms. In one case I am broke, in the other I am in jail. I don't want to be broke, but I really don't want to go to jail. There are worse things than jail, namely death. U.S. foreign policy has been dealing out death in a big way over the last eight years. War is worse than any economic or personal freedom violations. Of course war actually causes these violations as well.

Look at the Patriot Act. Clearly a war-time measure that is one of the most egregious violations of personal freedom in the checkered history of the United States. Look at our budget deficit and how much money we are spending on wars. Go beyond that and look at the weakness of the dollar and the problems that is causing.

If you keep looking, you'll soon notice the price you pay for gasoline. How much did gasoline cost before we started waging war in Iraq? I know better than most that correlation does not imply causality, but what do you think the price of gasoline would be today if the United States never invaded Iraq?

If gasoline were in the $2/gallon range, the deficit were a fraction of what it is currently, and the dollar were stronger, do you think the economy would be much of an issue at all?

There is a price to pay for war. We have tried to push all of that cost to our children in the form of budget deficits, but it has not worked. We are paying it at the pump. We are paying it at the grocery store. We are paying it when we buy "cheap" goods at Wal-Mart.

War is the most important issue. The only hope for less war is to vote for Obama. I wish Obama would pull all of our troops out of Iraq and not even leave behind any bases. I am frightened that he will expand military activities in Afghanistan and maybe Pakistan. He is not a perfect choice, by far. But in the interest of Country First, he is the only responsible choice that I can make.

JavaScript Benchmarks, now with Chrome

As promised yesterday, I did the JS benchmarks again on a Windows machine so I could include Google Chrome. I tried to be pretty inclusive, adding in IE7, IE8 beta 2, Firefox 3.0.1 (current release), Firefox 3.1 with and without JIT, Safari 3.1 (current release), Safari 4 beta, Opera 9.5, and Chrome. This was all run on my workstation, a 4-core, 3.2 GHz box with 8 GB of RAM. Any add-ons or extensions were disabled. Here is the pretty picture.


Once again Safari is the king. Safari 3.1 beats everything except for Safari 4 beta, which crushes even Safari 3.1. Opera was a little slower than Safari. Chrome was generally comparable to the various Firefox browsers, but overall slightly slower. Like Firefox 3.1+JIT, it was very fast on error handling! Of course IE was the slowest by far, but at least IE8 is faster than IE7. Maybe IE8 is shipping with debug symbols included (as Microsoft has often done in the past) and the release candidates will be much faster than the betas. Or not.

Anyways, Chrome, and its V8 engine, does well, but does not seem to be ahead of Firefox and is certainly behind Safari and Opera. Maybe they can do better on the Mac!

Wednesday, September 03, 2008

More JavaScript Benchmarking

My old boss sent me this link about Google Chrome performance. It's a good read. It includes a link to an interesting JavaScript micro-benchmark. It included some interesting findings on Chrome vs. Firefox 3, Safari 3.1, and the new IE 8 beta 2. I was curious about some other browsers, namely Firefox 3.1 beta with and without JIT, Safari 4 beta, and Opera 9.5. Of course I made a nice picture of my results.


Interesting results. First off, FF 3.1 with JIT did not crash. It crashed so many times on me yesterday that I was sure it would crash on this. Even though it did not crash, it was barely faster than FF 3.1 without JIT or FF 3.0.1. In fact, it was really only faster at error handling and the same on everything else. Apparently errors are easy to JIT for TraceMonkey!

Next, Safari 4 beta is fast. If you look at the link above, Safari 3.1 was already the fastest thing out there, so I guess this should not be a surprise. It crushed everything and it did it on the kind of tasks that real developers do a lot: array and string manipulation, regular expressions, and DOM manipulation (technically not part of your JS engine, but practically the most important test.) I am not used to seeing Opera lose when it comes to any kind of benchmark. If you throw out the array manipulation, it and Safari are pretty close.

I will have to boot up Parallels and try out Chrome vs. Safari 4 beta vs. FF 3.1 beta on Windows.

Tuesday, September 02, 2008

Firefox 3.1: Bring on the JIT

Web developers everywhere are excited about Firefox 3.1. Part of that is because of CSS improvements, but the big reason is TraceMonkey. This is a JavaScript engine with a JIT that uses trace trees, a pretty clever technique to turn interpreted JavaScript (read: slow) into compiled native code (read: fast.) JIT compilation is a big part of why VMs like the Java VM and the CLR are very fast, in general much faster than VMs that do not JIT, like those in Python, Ruby, or (until now) JavaScript. It is why JRuby is faster than Ruby. Thus the prospect of making JavaScript much faster is very exciting.

Recently I had done some micro-benchmarking of JavaScript performance vs. ActionScript/Flash performance. This concentrated on XML parsing only. Now the ActionScript VM is a JIT VM. In fact, Adobe donated it to Mozilla, where it is known as Tamarin. It has been Mozilla's intention for awhile to use this for JavaScript in Firefox, as JavaScript is essentially a subset of ActionScript. TraceMonkey is based on Tamarin, but it adds the trace tree algorithm for picking what to JIT. The trace tree approach allows smaller chunks of code to be JIT'd. For example, say you have a large function, like a single script that runs when the page loads. With a traditional JIT, you either JIT the whole function or not at all. Now what if that function has a loop that runs dozens of times, maybe populating a data table for example? With a trace JIT you can JIT just that one critical loop, but not the whole giant function. So it should be an improvement over Tamarin and thus ActionScript. Of course there is only one way to tell...
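A contrived sketch of that scenario (the function and its helpers are made up purely for illustration):

private function onPageLoad():void {
    // lots of one-off setup: a traditional whole-method JIT would
    // have to compile all of this just to speed up the loop below
    setupHeader();   // made-up helper
    setupFooter();   // made-up helper

    // the hot path: a tracing JIT records a few iterations of just
    // this loop, then compiles only that trace to native code
    for (var i:int = 0; i < 10000; i++) {
        addTableRow(i);   // made-up helper
    }
}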

So I repeated the same XML parsing tests that I did for Firefox 3.0 and Safari 4 (beta). First, I had to enable JIT in Firefox. One of the links above describes how to do this (open about:config in FF 3.1, look for the jit.content option and set it to true.) I restarted FF 3.1 just to make sure this took effect. I then ran the tests. The results? Not much difference between FF 3.0 and 3.1b+JIT. FF 3.1b+JIT was about 4% faster, which is probably statistically negligible. It was still 6x slower than ActionScript and almost 3x slower than Safari 4.

So what went wrong? Not sure. Here is the code that gets executed in my test:

// req is an XMLHttpRequest; initReq() (not shown in this post)
// creates it in a cross-browser way
function load(){
    var parser = new DOMParser();
    var xml = {};
    var start = 0;
    var end = 0;
    var msg = "";
    var results = document.getElementById("result");
    var li = document.createElement("li");
    initReq();
    req.open("GET", "LargeDataSet?size=50", true);
    req.setRequestHeader("Connection", "close");
    // use a closure for the response handler
    req.onreadystatechange = function(){
        if (req.readyState == 4 && req.status == 200){
            msg = "XML Size=" + req.responseText.length;
            start = (new Date()).getTime();
            xml = parser.parseFromString(req.responseText, "text/xml");
            end = (new Date()).getTime();
            msg += " Parsing took: " + (end-start) + " ms";
            li.appendChild(document.createTextNode(msg));
            results.appendChild(li);
        }
    };
    req.send(null);
}

Pretty simple code. I manually execute it 20 times. It would sure seem like it could be JIT'd. What gets timed is just the parser.parseFromString(...) call, where parser is a DOMParser. Maybe that object cannot be JIT'd? Maybe there is a bug with the JIT that will be resolved in the future? It does seem to suggest that TraceMonkey may not always be the slam dunk everyone expects.

I was surprised by the results. I thought that FF 3.1 would be faster than FF 3. I didn't think it would be faster than ActionScript in this case, but I thought that it might be close. In many other cases, I expect ActionScript to still be much faster than TraceMonkey. Why? Well, there is one other ingredient in VMs like the JVM and CLR that makes them fast: static typing. This allows the VM to make a lot of other optimizations that work in combination with JIT'ing. For example, knowing that a particular variable is a number or a string allows the VM to inline references to that variable. This can eliminate branches in logic (if-else statements, where maybe the else is not possible.) The JIT can then take place on the simplified, inlined code, and be about as fast as possible.
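Here is a small sketch of why that helps, in ActionScript since it has the optional static types. With the typed version, the VM knows every operation is integer arithmetic; the untyped version has to check what kind of value it is holding on every pass:

// typed: the VM knows total and i are ints and can emit native math
private function sumTyped(count:int):int {
    var total:int = 0;
    for (var i:int = 0; i < count; i++) {
        total += i;
    }
    return total;
}

// untyped: each += must first check what kind of value total holds
private function sumUntyped(count:*):* {
    var total:* = 0;
    for (var i:* = 0; i < count; i++) {
        total += i;
    }
    return total;
}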

If you read about some of the techniques used in TraceMonkey, it tries to do a lot of the above via type inference. So in some cases TraceMonkey and the AVM2 (ActionScript VM) may be able to do the same level of optimizations. In fact, given its tracing approach, TraceMonkey may be able to do better. But I am guessing that there will be a lot of situations where AVM2 will be able to do more optimizations just because of the extra information it has at its disposal in the form of static typing.

Sunday, August 31, 2008

Recent Other Writings

Last week, IBM published an article I wrote on using JRuby on Rails with Apache Derby. It concentrates on rapid prototyping/development. I didn't get too heavily into the IDE side of things, but when you add RadRails into the equation it really is nirvana-ish development. Very fun.

I've also been writing a lot on InformIT about Java Concurrency in Practice. I did some fun stuff over there too, like trying to turn some Project Euler code into parallel code. I guess technically that succeeded just fine, but it is a good example of when parallel code is not any faster. In this case, the algorithm was CPU bound anyways. Even having two cores didn't really help much. Oh well. I treated it like a strength exercise from back when I took piano lessons.