Did the Oil Company Miss You?

Yesterday was the unofficial "Gas Boycott" of 2007.  I wonder if the oil industry even noticed.

Considering how Vancouver is said to be a very "green" city, I was expecting transit to have a few more people than usual.  Oddly enough, there were more new cars on the road, and the buses were quite empty both on the way to work and back.  The weather was absolutely gorgeous outside, so why would anyone want to spend $2.25 on transit just for the opportunity to sit next to people who go out of their way to ignore you?

I wonder who started the email chain calling for the boycott.  It wasn't the same crowd that pretends to beg for our help to illegally smuggle a few million dollars out of the country (for a small up-front fee and all our personal information).  But just like the Nigerians, big numbers were thrown around in an attempt to sell us on the "Oooh" and "Aaah" factors.  Here's a copy of the email.  Oddly enough, I managed to get one this time:

Subject: FW: Don't pump gas May 15th 2007

NO GAS...On May 15th 2007

Don't pump gas on may 15th

In April 1997, there was a "gas out" conducted nationwide in protest of gas prices. Gasoline prices dropped 30 cents a gallon overnight.

On May 15th 2007, all internet users are to not go to a gas station in protest of high gas prices. Gas is now over $3.00 a gallon in most places.

There are 73,000,000+ American members currently on the internet network, and the average car takes about 30 to 50 dollars to fill up.

If all users did not go to the pump on the 15th, it would take $2,292,000,000.00 (that's almost 3 BILLION) out of the oil companies pockets for just one day, so please do not go to the gas station on May 15th and let's try to put a dent in the Middle Eastern oil industry for at least one day.

If you agree (which I can't see why you wouldn't) resend this to all your contact list. With it saying, "Don't pump gas on May 15th"


I'm curious to know where these figures came from.  Sure, those are some pretty big numbers, but somehow I doubt that every person on the internet has a car.  And I really doubt that everyone has to fill up each and every day.  If I were paying $30 a day in gas, I'd have to be either very stupid or a member of some royal family.

I don't recall a "gas out" in 1997, but I know there was one in 1999.  Gas prices didn't fluctuate at all according to historical trends and, despite the popularity of the email campaign, very few people took part.  How many people had email in 1999?  A few million?

If, say, a hundred million drivers refused en masse to fill up their tanks on May 15, the total of what they didn't spend could amount to as much as $3 billion. However, it doesn't follow that such a boycott would actually decrease oil companies' revenues by that amount, given that the average sales of gasoline across the entire US is under $1 billion per day in the first place.
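
To put rough numbers beside the claim, here's a back-of-the-envelope check.  The 100 million drivers and $30 fill-up are the chain letter's own assumptions, and the $1 billion per day in total US gasoline sales is just the approximation mentioned above, not an official figure:

<?php
// Back-of-the-envelope check of the chain letter's math.  The inputs are the
// email's own assumptions plus a rough figure for daily US gasoline sales.
$drivers     = 100000000;    // hypothetical boycott participants
$fill_up     = 30;           // dollars per tank
$daily_sales = 1000000000;   // approximate US gasoline sales per day

printf("Claimed one-day impact: $%s\n", number_format($drivers * $fill_up));
printf("Approximate actual daily sales: $%s\n", number_format($daily_sales));
// You can't pull $3 billion out of a market that only sells about $1 billion
// a day, and the skipped fill-ups just shift to later in the week anyway.
?>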

Whether the total impact was a half-billion, 3 billion, or 10 billion dollars, the sales missed due to a one-day consumer boycott wouldn't hurt the oil companies one bit.  Think about it.  Every single person who doesn't buy gas on Tuesday is still going to have to fill up their tank on Wednesday, Thursday, or Friday, making up for Tuesday's losses.  Sales for the whole week would be normal, or very close to it.

A meaningful boycott would entail participants actually consuming less fuel -- and doing so in a sustained, disciplined fashion over a defined period of time -- not just choosing to wait a day or two before filling up as usual.  Perhaps the next campaign should focus more on getting people to carpool or take transit on alternate weeks.  Since I live alone and have nobody to drive around, I gave up my car years ago.  For less than $1200 a year I have relatively unlimited travel in the Vancouver area, thanks to my bus pass and solid knowledge of what's available in the community.  Sure, I can't just hop in the car and go somewhere, but it means that I'm putting less carbon into the atmosphere.  I really dislike seeing a hundred SUVs pass me by where the only person inside is the driver.  It just reeks of waste.

Having money is just fine.  But try to leave some of the planet behind for the rest of us.

AMD's Phenom Announced

I was wondering when AMD would get around to showing some of their upcoming 4-core technologies.

I've been watching the processor wars from the sidelines and it's clear to everyone that Intel's Core processors have taken back the market lead from AMD's Athlon line of processors.  Despite being a bit slow to update its lineup, Intel's designs seemingly destroyed everything that AMD had to offer for quite some time.  The quad-core processors from AMD were nowhere near ready for deployment, and the Athlon X2s just couldn't keep up with the Core 2 Duos.

But this may be about to change.

Today AMD announced their upcoming four-core processors (called Phenom) alongside their power-hungry ATI HD 2900 series video cards.  What I like about the Phenoms is that all four cores are built into a single die, unlike Intel's current approach of pairing two dual-core dies in one package.  This should allow for very fast communication between the cores and some serious raw processing power.  These things apparently still fall within the existing power envelope, which is pretty impressive, as conservation seems to be the theme of the year.

While I doubt that I'll make use of such technology in the short term, I'm happy to see AMD come back with some strong technology of their own.  This should keep the innovations from both Intel and AMD coming at a respectable pace, pushing computing technology to the very limits of the physical universe (as we understand it).

Keep them comin', AMD.  My old Opteron processors have exceeded expectations, and I'm sure these new Phenoms will make my apps fly like Windows 3.1 on a P4/3.2 (am I the only one that's ever tried this?).

The World's Best PDA

Why can't I find these things in Canada?

I have been looking for a replacement PDA for the last little while now as mine is becoming less and less reliable every week.  So while going around eBay to see what deals I might be able to find (can anyone actually find deals on that site?), I happened to come across this happy little device.

HTC's Advantage x7501

The first thing I thought when I saw the specs on this thing was "zomg, this is everything I've ever wanted in a portable device."  So naturally, I can't find one within the whole of Canada for sale.

Being the geek that I am, I am absolutely in love with some of the specs on this thing.  Windows Mobile 6 is nice, but the 5 inch LCD and detachable keyboard are the primary sellers.  I don't need a keyboard on my portable devices, as the on-screen block recognizer (Graffiti 1 for Palm users) suits me just perfectly.  And a 5 inch screen ... that's the very size I've wanted for the last few years.  But of course that's not all this little device has to offer.  It also comes standard with WiFi and Bluetooth, both of which are quite important to me and should be on my next device.  Add to that the fact that this computer is also a cell phone with WCDMA, and you have yourself a winner.  Then, just for the added bonus, HTC also threw in a GPS receiver and 8 Gig of storage.

It's as though the device was designed with people like me in mind ... I want one.

The sticker price is a little heavy: about $1,100 CDN.  But when you look at everything this little device can do, the price is most certainly worth it.  I plan on moving to Japan in the next few months, so having a mobile device that can communicate over WCDMA is quite important to me.  Of course it also has the standard quad-band GSM capabilities, but that will soon be a feature that's only used on vacation.

Considering how my portable devices are now required to last at least two years before replacement, and how my current machine is 3 years old and apparently suffering from a form of digital Alzheimer's, I think that this machine may be the perfect solution for me.  With the GPS and a set of Japanese maps, I stand less of a chance of getting lost (though it'll happen anyway).  And with the WCDMA capabilities, I should be able to keep my Canadian phone number for just a bit longer so that friends and family don't need to worry about calling internationally ... even if it means I'm paying roaming fees.

But of course the 5-inch TFT is a huge seller.  The 3.8-inch models I've been using since 1999 have been great, but it's time for a bit more mobile-desk space.

Now, if I can only find a Canadian distributor ....

Analytics vs. FireStats vs. Raw Access Logs

I love data collection, and summarizing that data into useful information.  I've done this on some epic scales with some of my employers recently, and I also enjoy doing it with the data collected on this site.  One thing I have noticed, however, is that Google Analytics often displays very different results from what I'm finding with two other sources, and it makes me question the validity of Google's data.

At first, I was interested in numbers.  But after 8 months of blogging, I believe that this site has pretty much peaked for the time being.  Unless I can offer something of real value to the online community, I don't see my existing numbers changing too much.  So aside from sheer access counts, I've also been seeing what operating systems people are using, what browsers, and (more importantly) where people are accessing this site from.

To share just a little of this data, over the last 8 months I have logged 1.8 million page visits.  Of these, 128,912 have been from real people (as best as I've been able to weed out).  From these 129 thousand people I've learned that 88.3% of them use a variation of Windows, and 3.2% use Mac OSX.  Ubuntu is the most common flavour of Linux seen, with 0.4% of all visitors using that user-friendly OS.  IE is still on top of the market with 72.6% of the share, Firefox with 19.5%, and the remaining 12 browsers duking it out for the rest.
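
For the curious, the OS and browser breakdown really just comes down to bucketing user-agent strings.  The sketch below is illustrative only: it is not the actual query I run against my database, and the 'access.log' filename and matching rules are placeholders:

<?php
// Illustrative only: bucket the user-agent strings in an access log into
// rough OS families.  The filename and heuristics are placeholders.
function classify_os($user_agent) {
    if (stripos($user_agent, 'Windows')  !== false) return 'Windows';
    if (stripos($user_agent, 'Mac OS X') !== false) return 'Mac OS X';
    if (stripos($user_agent, 'Ubuntu')   !== false) return 'Ubuntu';
    if (stripos($user_agent, 'Linux')    !== false) return 'Other Linux';
    return 'Other/Unknown';
}

$counts = array();
foreach (file('access.log') as $line) {
    // in the combined log format, the final quoted field is the user agent
    if (preg_match('/"([^"]*)"\s*$/', $line, $m)) {
        $os = classify_os($m[1]);
        $counts[$os] = isset($counts[$os]) ? $counts[$os] + 1 : 1;
    }
}
arsort($counts);
print_r($counts);
?>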

While that data is partially amusing, it doesn't really hold much value for me.  My site will load properly in all the major browsers and I'm content running Windows, FreeBSD and Solaris for the various roles and tasks that my computers must fulfill.  What really fascinates me is the global locations of my visitors, and how the data went completely against my expectations (data rarely ever surprises me at work).

In the first few months of operation, traffic was as expected.  The United States made up the lion's share of my traffic, followed by Canada and Japan.  The occasional hit from Korea, Italy and Mexico would catch my eye, but I had expected people might stumble across this site while looking for something completely different.

However, shortly after I moved my site to ANHosting in January (mainly because my home webserver was seemingly overwhelmed with all the crawlers, and my monthly bandwidth allowances with my ISP were starting to break records), I noticed that I was receiving far more hits from these unexpected countries.  In the same month, I had added Global Translator to my site, and the crawlers had a heyday with this.  Every page was translated into nine other languages, then stored and indexed for future requests on Google, Yahoo, MSN and a plethora of other search engines and universities.  From there, the international traffic took off.  No wonder my little home server was choking....

In the last 90 days, Spain, Brazil and the US have been the three countries visiting this site the most.  Spanish seems to be the language of choice for most people reading my content, which makes me wonder just how accurate BabelFish's machine translation engine really is.  Greece is right behind, with Italy, Portugal, France, the Netherlands and Britain following.  Then come Japan, Canada, Mexico, Colombia, Ukraine and Korea, followed by another 44 countries sharing the rest of the traffic.

This site is read mostly in Spanish, followed by English, German, Chinese, Arabic and Japanese.  Russian is the least accessed language.

All of this has been gleaned from the raw access logs on the web server, then loaded into a custom database built on SQL Server 2000 and sorted from there.  I've been using IP2Location's IP-Country-Region-City database to determine approximately which cities people are in and narrow the criteria down further.  Please note that I don't do any of this for marketing purposes.  I will not have any AdSense ads or anything that remotely looks like an advertisement on this site.  The most that I'll do is offer a link for a product or service that I find useful.  I try to give credit where credit is due.
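
For anyone wondering how the country lookup works, the idea is simple: an IPv4 address maps to a single number, and the geolocation data is a table of numeric ranges.  The sketch below only approximates my setup; the column names are guesses at a generic schema and the sample rows are completely made up:

<?php
// Simplified idea behind the country lookup.  The sample ranges below are
// made up, and the 'from'/'to'/'country' keys only approximate a real schema.
function ip_to_number($ip) {
    // e.g. 192.168.1.10 -> 192*16777216 + 168*65536 + 1*256 + 10 = 3232235786
    return sprintf('%u', ip2long($ip));
}

$ranges = array(
    array('from' => 1000000000, 'to' => 1050000000, 'country' => 'XX'),
    array('from' => 2000000000, 'to' => 2100000000, 'country' => 'YY'),
);

function lookup_country($ip, $ranges) {
    $n = ip_to_number($ip);
    foreach ($ranges as $r) {
        if ($n >= $r['from'] && $n <= $r['to']) {
            return $r['country'];
        }
    }
    return 'Unknown';
}

echo lookup_country('61.177.7.10', $ranges) . "\n";  // 'XX' in this fake data
?>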

All this started when I first installed Omry Yadan's FireStats.  I really like this plugin for WordPress (and almost any other site, if you know how to integrate it) as it's easy to install, collects data very quickly, and displays accurate information that's less than 0.5% different from what I find in the raw access logs for this site.  The differences occur primarily with 404's and downloads.  I can live with this, as my raw data lets me know how often people go to a non-existent page, or when one of my plugins is downloaded.

Because FireStats is so in line with these other logs, I tend to use this as my primary source of information.

Yet with all the talk about Google Analytics, I decided to give it a shot just to see if it could provide value that isn't easily available elsewhere.  And while it does offer a few unique views, it also shows data that I cannot confirm in my own raw access logs.

Over the last few weeks, Analytics has shown hits from countries that I'm surprised have access to the internet, let alone time to use it.  According to their data, I've had visits from South Africa, Kenya, Iraq, and Somalia.  South Africa I can kind of understand, but I can't find any South African IPs in my access logs.  Nor can I find any Iraqi, Kenyan or Somali IPs.  My IP2Location database is right up to date, and while IP ranges can hop between countries, I can't see this happening often enough that these countries all show up as false positives within the same week.

Using May 10th, 2007 as the base, I took a sample of all the traffic between 00:00:00 GMT and 23:59:59 GMT and found that Google was often only collecting 4% of my actual access data.  Thinking that they were filtering out all the crawlers (which makes sense), I then compared the data for the 10th using only valid users, and found that Google was still only showing just under 70% of my expected traffic.  Just for giggles, I then compared the country information between the two sources and found that three countries reported by Google were not found in my access logs.  To check whether I had data from the exact same time frame, I examined the access logs for these three countries and found that I have not had hits from two of them in more than 14 days, and the other was from the day before.

So I'll give Analytics that one country.  They may not be using GMT in their logs, and I can live with that (it seems to be PST when I look, so perhaps the logs are adjusted to the viewer's time zone).  But where is this other data coming from?
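
For reference, pulling a single GMT day out of an Apache-style access log boils down to something like the sketch below.  My actual comparison was done in the database, so this is just an illustration; it assumes the common combined log format and a placeholder 'access.log' filename:

<?php
// Sketch: count requests in a combined-format access log that fall on
// 10/May/2007 in GMT.  Timestamps look like [10/May/2007:13:45:01 -0700];
// strtotime() gives an absolute time, and gmdate() renders it in GMT.
$hits = 0;
foreach (file('access.log') as $line) {
    if (preg_match('#\[(\d{2}/\w{3}/\d{4}):(\d{2}:\d{2}:\d{2}) ([+-]\d{4})\]#', $line, $m)) {
        $ts  = strtotime(str_replace('/', ' ', $m[1]) . ' ' . $m[2] . ' ' . $m[3]);
        if (gmdate('Y-m-d', $ts) === '2007-05-10') {
            $hits++;
        }
    }
}
echo "Hits on 2007-05-10 (GMT): $hits\n";
?>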

Aside from the Geolocation map and language graphs that Analytics offers, I do not see much value in this for me.  Maybe if I were taking part in AdSense or some other marketing campaign management where visits and clicks equate to dollars and cents ... but even then, if my raw access logs are showing so much more activity on my site, I wonder just how accurate the dollars-and-cents reporting from Analytics would be.

I've tried putting the Google JavaScript in the header, footer, and everywhere else on my site in the event some people stripped out the sidebar, but no dice.  I am forced to wonder what value Analytics would offer to businesses if the data collection could be foiled just by a user preventing JavaScript from running on their machine....

So for anyone that hosts a site and would like to know where their users are from or how many hits they receive in a day, I'd suggest using FireStats.  The interface is very clean and it integrates quite easily into WordPress.  If anyone knows why Analytics' data is so different from my access logs and/or FireStats, I'd love to know why.

Edging Ever Closer to the Tropics

Global warming can have some pretty scary side-effects, but one positive note is that the areas of the country that aren't submerged by the rising waters will be able to support a greater variety of crops.

A computer model developed by Royal B.C. Museum scientists suggests West-Coast climate conditions could change so dramatically within the century that warm-weather crops such as oranges and avocados could be grown on southern Vancouver Island, and the province could become one of North America's primary farming regions.  The Global Climate Model uses historical temperature and precipitation observations to project future climate conditions based on the current rate of greenhouse-gas emissions.

The potential is there for much of the province's marginal pasture lands to become major areas for food production and security.  Of course, this wouldn't be an overnight success story as there are many factors that could get in the way of such production.  Pests and diseases, water availability, other demands on agricultural lands, soil suitability and preparation could all stand in the way of turning much of the land into an agricultural heaven.

The pine beetle has destroyed quite a bit of B.C.'s forests, and as the temperature rises, there are sure to be other destructive insects to get in on the action.  Of course other factors include the preservation of the Agricultural Land Reserve as well as the surrounding lands.

But with the 5 degree Celsius rise in mean planetary temperature expected this century, I'm curious to know which crops could realistically be grown in the province and other areas of Canada.  According to these computer models, some areas of British Columbia will be suitable for avocados, sugar cane, lemons, oranges, pecans, rice, olives ... the list is quite extensive.  However, if the temperature rise proves to be true, agriculture may be more about sustenance crops such as grains than other goods like grapes and peaches.

I'm also curious to know what human migration patterns will be like in the next half-century.  With some areas of North America reaching 45 degrees Celsius in the summer, there are bound to be many people moving farther away from the equator.  Canada has plenty of space to handle an exodus from many equatorial nations, so if we were to become even more of an agricultural powerhouse for the world, I'm sure we could offer plenty of work to those uprooted from their homes.

Not that Global Warming is a good thing ... but at the very least we could offer some opportunity to the people forced from the heat.

Einstein@Home WordPress Plugin

Okay ... call me cheap, but here's yet another BOINC-related plugin.  I've decided that this will be the last stand-alone BOINC Stats plugin I'll release; the next ones will allow users to display any of the projects that are currently available without using separate plugins that are pretty much the same.

But enough of that ... on with the release!

Einstein Stats is a WordPress plugin that displays your current Einstein@Home stats.  This was put together mainly because of the recent server failure at the BOINC SETI project.  Since my computers were sitting idle for far too long, I gave them another task.  I must admit ... Einstein is much harder on the processors than SETI ever was.  It takes just over 26 hours for my pair of dual-core Xeons to get through a work unit each ... which is almost unheard of with my SETI data.

That said, this plugin is a little light on features.  Currently it will display your total work units, average work units, and team name (if applicable).  In the future, I plan on having a user-configurable option to display other data like number of PCs on the project, pending credit counts, and personal standings.

You can download the most current version of Einstein Stats here.

Requirements:

Einstein Stats has been tested on WordPress 2.0.4, 2.0.5, 2.0.7, 2.1, 2.1.3 and 2.2 RC1.

Installation:


  • upload the contents of the zip file to your “wp-content/plugins” directory (be sure to write them to the einstein-stats directory)

  • go to the “Plugins” main menu and find “Einstein Stats Display”, then click “Activate”

  • go to the “Options / Einstein Options” menu and enter your account id, and set the number of hours between stat refreshes


Using:


  • modify the theme file where you wish to display your Einstein stats (usually sidebar.php) and type in the following line:


<?php get_emc2_stats(); ?>
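
If you'd rather the sidebar not throw an error when the plugin is deactivated, you can optionally wrap the call in a function_exists() check.  This is just a small convenience sketch, not a requirement for the plugin to work:

<?php if (function_exists('get_emc2_stats')) { get_emc2_stats(); } ?>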

Uninstallation:


  • go to the “Plugins” main menu and find “Einstein Stats Display”, then click “Deactivate”

  • delete the files from your “wp-content/plugins” directory


Change Log:


Bug Reports:

As always with initial releases, I’m sure there will be one or two things that I forgot to check.  If you happen to find a bug, please let me know.

Enjoy!

Effective RAID Levels for a Consumer NAS

Sometimes I wonder if we argue with each other for the thrill of combat.  Other times I wonder if the other person will ever shut up.

A while ago I drew up plans to construct an affordable NAS (Network Attached Storage) device with around a Terabyte of storage to start, and enough expandability to scale easily over the next few years.  I was discussing the basic design with some people at my local coffee house today when, out of nowhere, this 20-something guy invited himself into the conversation and started ripping into my recommendation of a RAID5 array to store the data.

Now, before I get off on yet another rant here, I must say that I'm incredibly surprised by how often this argument comes up in chatrooms, forums, newsgroups, and almost anywhere else geeks and lesser hobbyists get together to talk shop.  I would have thought that with all the hard facts regarding the pros and cons of different RAID levels out on the internet and in various trusted trade magazines, the majority of people would be at least familiar with when to use certain levels, and when to avoid them.

The key ideas behind my custom NAS solution are really quite simple.  The device must be:


  • cheap (under $1000 CDN with initial potential capacity of 1 TB)

  • relatively reliable

  • easily scalable


This is not a very big list of "musts", and it's the first point that I tried to stress the most to this man who usurped an otherwise pleasant discussion of potential storage solutions.  But like many of the people who argue about anything and everything on IRC or 4chan, this person refused to listen to the requirements before deeming everyone within earshot who would not agree with him to be a "complete idiot" ... if I remember his comment correctly.

It was at that point I stopped listening.

Since a blog post can't be rudely interrupted (once posted), here's my reasoning for a Level 5 RAID array on the consumer-grade NAS I hope to build.  If you disagree, feel free to post your opinions and perhaps suggest some alternatives that would keep the cost of the storage server within target.  To keep things even, I'll be voicing not only the pros behind RAID5 (of which there are a few), but the cons as well.

The biggest selling factor behind RAID5 (or RAID6, for that matter) is that it's cheap and provides some basic redundancy in the event a drive fails.  On the downside, RAID5 and RAID6 both take a real hit on write performance (especially random writes), since every write involves a parity calculation.  RAID 1+0 or 0+1 offers excellent sequential and random read/write performance, but requires more drives and more expensive hardware to be truly worth the effort of building a NAS.  RAID5 again comes up short in terms of availability, as it can only lose one drive before the data is unprotected (RAID6 allows you to lose two drives).  RAID 0+1 and 1+0 can lose up to half the drives in an array without losing data.

As an example, if you have two shelves with 12 drives each in a RAID 0+1 or 1+0 array, with the mirror sets spanning the shelves and the stripe sets contained within each shelf, you can lose an entire shelf without affecting the operation of the server.  RAID 5 or 6 simply cannot survive that scenario.  How likely is this to happen?  Well ... like everything in life, it's 50/50.  Either it will, or it won't.

But who has this kind of money for a home storage server?  How many nines do you need for your data at home?

As it is, everything that's on my existing NAS is backed up on DVD.  The main reason I aim to use networked storage is so that I don't need to look through my archive index and then flip through dozens of DVD binders to make use of the files I want.  At the same time, I want my data to be easily available to several machines ... some of which have no access to my DVD archives.  Then of course comes the problem of streaming all that media to UPnP devices.

Where would I put my 107 Gig of mp3s?  What about all the other digital media I have?  How annoying would it be to fish out my mp3 archives on DVD just to listen to seven or eight CDs that I don't want to grab individually?

I'll admit that this can be chalked up to laziness.  If everything is backed up on an optical disc and properly catalogued, it shouldn't be that much of a hassle to fish out the appropriate binder.  But that's not the point.  It's the principle of the matter.

So I don't really need a high-availability system.  If one drive fails, it would be nice to hot-swap a new drive and let the data rebuild itself (this is excruciatingly painful with RAID5, as it means the server is pretty much unavailable to everyone until it's done).  I did play with the idea of RAID0 to get the maximum storage capacity out of the system, but I don't want to re-load more than a Terabyte should one of the drives fail on me.  JBOD could help me get around this little problem, but it would mean that I'd have to know which files were on that drive to restore the data.

Just because I work with computers for a living does not mean that I want to spend my evenings or weekends reconstructing data.  That's what computers are for.

As it is, I plan on getting two more 320 Gig Seagates and putting them to use in this new box alongside the three existing 320's I have.  Under RAID5, that will give me just over a Terabyte of storage.  This will all be controlled through a 3Ware card (on-board RAID might get me by, but I don't relish the idea of using quasi-software RAID for this box) running under FreeBSD.  The system will also be configured to send emergency SMS messages to my cell phone should anything fail.
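
For the curious, here's how the usable space and failure tolerance for this five-drive setup work out.  These are the standard textbook formulas, nothing vendor- or controller-specific:

<?php
// Rough usable-capacity and fault-tolerance comparison for n identical
// drives, using the standard textbook formulas.
$n    = 5;    // drives in the planned array
$size = 320;  // GB per drive

printf("RAID 0 : %4d GB usable, survives 0 failures\n", $n * $size);
printf("RAID 1 : %4d GB usable, survives %d failures\n", $size, $n - 1);
printf("RAID 5 : %4d GB usable, survives 1 failure\n", ($n - 1) * $size);
printf("RAID 6 : %4d GB usable, survives 2 failures\n", ($n - 2) * $size);
// RAID 1+0 needs an even drive count; with 6 of these drives it would give
// 3 * 320 = 960 GB usable and survive anywhere from 1 to 3 failures,
// depending on which drives die.
?>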

So if one drive dies, I can pick up a new one on the way home for $100 (as of this writing) and let the data rebuild.  If two drives die, well ... that would suck, but I could get two new drives and spend a weekend re-loading the data.  As for the OS, that's going to run on something much sexier than a RAID set.  I plan on using a bootable flash drive for this in order to optimize the power savings.  Once the OS is written, I will rarely ever need to write to the flash card.  A two gig CF card would work perfectly, I think.  As it is, my drives are spun down for 10 to 14 hours a day.  Why waste the power?

So that's my plan.  I've studied RAID quite a bit over the last decade, and while I can't claim to know it all, I do know where certain levels can be used, and where others should be avoided.  But every level has an application in today's world.  Some are just better suited than others.

Conclusion:


  • RAID 0: The lack of fault tolerance makes it unsuitable for enterprise applications and risky in most others.

  • RAID 1: Fine for OS and application binaries; if you're considering it for a heavy-load transactional database because of server capacity limits, you should be considering another server instead.

  • RAID 5: Weak write performance and limited fault tolerance make it unsuitable for enterprise applications, but it's acceptable for small business or consumer storage needs.

  • RAID 6: See RAID 5.

  • RAID 10/01 (0+1, 1+0): Excellent performance and availability make these levels ideal for enterprise applications, though a bit pricey for the rest of us.

  • RAID 15/51: Excellent availability but poor performance, which makes it unsuitable for database applications.  Not widely available, and not something consumers would ever ask for at home.


Don't believe me?  Here's some light reading:

NASA's Next Mission to Mars

NASA's Phoenix Mars Lander spacecraft was recently transported to Florida in preparation for its upcoming mission, potentially launching as early as August 3rd this year.

In keeping with the organization's goal of sending new probes to our closest biologically viable neighbour planet, NASA has been building and testing this new device in Denver with the hope of launching it in time to take advantage of the orbital geometries between Earth and Mars, thus saving fuel.  To keep costs low, Phoenix will be using a lander structure and some other components originally built in 2001 for a mission that was cancelled before it even finished the development stage.

I must admit, I really like this approach.  In the past, NASA has had an incredible amount of resources to pull from.  While this is great for science, it can lead to excessive waste.  With all the budget cuts made over the years and the harsher panels convened by the government to account for the billions spent, management has been forced to make the most of absolutely everything.  Hopefully the methods employed here will not be forgotten should NASA ever be given a massive budget for a large-scale future endeavour.

One thing that really surprises me about the Phoenix Mars Lander is that it's going to be a lander rather than another rover.  Our understanding of the martian terrain and history has greatly increased thanks to the seemingly tireless efforts of both Spirit and Opportunity, and I would have figured that more of these units would have been sent with more specialized tools and instruments for the various missions (I wonder if a flyer will ever be sent ...).

Yet even with its stationary placement, Phoenix will be able to glean more information about Mars' history and potential for microbial life.  It should land in the martian arctic sometime in the spring of 2008 and will soon be scooping up the soil found just beneath the surface.  Studies from orbit have suggested that within an arm's reach of the surface, the soil holds frozen water.  If this is true, it will be a tease to every pioneering spirit here on Earth.  With a virgin planet to explore and tame, I'm sure many would be willing to rise to the challenge so long as the absolute basics of life could be met.

Minimo - The Next Portable Browser

I love PDAs.  Many of these little devices can do it all, especially O2's XDA Trion mobile powerhouses.  One area where I have noticed a lack of innovation, however, is the browser.

Over the last few years I've worked with various browsers on portable devices and found each of them lacking.  I'd love to find a release of Opera for Windows Mobile 5, but this seems to be next to impossible.  I've heard it is available, so hopefully the memory requirements for the application are a little lighter than on my XP and Vista systems.  So it came as a bit of a surprise when I stumbled across a small browser called "Minimo".

This small application is pretty slick.  It runs on a Mozilla core and seems to be capable of viewing almost every site I've visited.  Social bookmarking, tabbed browsing, SSL/TLS support, and JavaScript/AJAX support are just some of the incredible features that this application comes with.  I've been using this on my iPAQ for a little over two weeks now and can't believe I survived with Pocket IE for as long as I did.

I really like the clean interface and intuitive operations.  One of the biggest problems when designing anything for a mobile device is the lack of screen real estate, and the UI designers certainly know how to make this look easy.

As of this writing, version 0.2 has been released, but don't let the small number fool you.  The stability and capabilities of this browser are worthy of a 2.x designation.  If anyone running Windows Mobile 4.2 or 5.0+ would like to replace their existing browser, I'd strongly suggest giving Minimo a try.

The Universe's Brightest Supernova

Galaxy NGC 1260 is some 240 million light years away.  To put that into some perspective, it would take a starship 158,311 years at a constant speed of warp 9 (1.02 trillion miles per hour) to reach it.  Yet despite this incredible distance, NASA's Chandra telescope has recorded a massive supernova there.

This incredible source of destruction was about 100 times more powerful than a typical supernova, and scientists believe that, based on the readings, this star may have been about as massive as a star can (theoretically) get ... about 150 times the mass of our own second-generation yellow star.

It's long been thought that the first generation of stars were massive, and this particular supernova allows us a rare glimpse into how these first stars died.  That said, finding these massive stars and then witnessing their death is not without challenges.

The star that produced this supernova (SN 2006gy) apparently released a large amount of mass before exploding.  This large loss of mass is similar to something happening to a star within our own Milky Way galaxy called Eta Carinae, which may be ready to go supernova itself.  Although SN 2006gy is technically the brightest supernova ever recorded, Eta Carinae is only 7,500 light years away (a little under 5 years at a constant speed of warp 9).  Should Eta Carinae go supernova in our lifetime, it would be one of the brightest objects in the night sky for years.
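
For anyone checking the warp math in this post, here's the arithmetic behind the figures.  The main assumption is the usual TNG-scale convention that warp 9 is roughly 1,516 times the speed of light:

<?php
// Back-of-the-envelope warp math for the distances quoted above.
// Assumption: warp 9 = roughly 1516 times the speed of light (TNG scale).
$warp9_factor = 1516;        // times the speed of light
$c_mph        = 670616629;   // speed of light in miles per hour

printf("Warp 9 is about %s mph\n", number_format($warp9_factor * $c_mph));
printf("NGC 1260 (240 million ly): %s years at warp 9\n",
       number_format(240000000 / $warp9_factor));
printf("Eta Carinae (7,500 ly): %.1f years at warp 9\n", 7500 / $warp9_factor);
?>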

Supernovas usually occur when massive stars exhaust their fuel and collapse under their own gravity.  In the case of SN 2006gy, astronomers think that a very different effect may have triggered the explosion.  Under some conditions, the core of a massive star produces so much gamma radiation that some of the energy from the radiation converts into particle and anti-particle pairs.  The resulting drop in energy causes the star to collapse under its own massive gravity.

After this collapse, violent runaway thermonuclear reactions occur (similar to hydrogen bomb mechanics) and the star explodes ... sending its matter in all directions.  This paves the way for smaller second-generation stars (like our own) to form, potentially with proto-planetary discs for solar system formation.

I often wonder if perhaps computer programming was the wrong field to get into ... astronomy has always been far more exciting, and I would love to examine the effects of a supernova on the fabric of space.  To have such a massive gravity well suddenly spring forth would have huge repercussions for the surrounding region ... aside from the obvious.
