In 2008, Red Hat made its way into the server virtualization business by acquiring Qumranet, picking up the KVM hypervisor and commercializing it as Red Hat Enterprise Virtualization.
A few years later it acquired open source storage provider Gluster, whose clustered file system became what is now known as Red Hat Storage Server. Now the Linux vendor has integrated the two so they can run side by side on the same clusters, uniting compute and storage on commodity servers.
Red Hat hosted a webcast yesterday to talk about the traction recently gained by its open source Gluster project and by Red Hat Storage Server 2.0, the latter of which launched in June of this year and started shipping in July.
On the webcast, Ranga Rangachari, general manager of Red Hat’s storage division, said that Gluster has passed 160,000 downloads and that its community of open source and enterprise Linux application developers has grown about 160 percent since Red Hat paid $136 million to acquire the company.
Rangachari added that the company now has over 100 proofs of concept up and running, and that it is working on getting around 30 channel partners up to speed on selling the RHSS product as an alternative to other clustered file systems and disk arrays.
But this could be a tall order in some respects, and harder than getting key IT vendors to support its Enterprise Linux operating system or Enterprise Virtualization hypervisor. The reason is simple enough: the key server makers who push those two Red Hat products have their own storage businesses to protect.
Getting Super Micro to sell the commercialized Gluster clustered file system is easy enough. However, Sirius Computer Solutions and Mainline Information Systems are two big IBM server resellers, so it’s a bit surprising to see them offering RHSS to their enterprise customers.
That HP is also on the list of thirty partners getting ready to sell the product is surprising too, but considering how eager HP’s software business is to boost sales and profits, it starts to make sense.
The other companies Red Hat cited are smaller and probably less well known: CityTech, Groupware Technology, Carahsoft, International Integrated Solutions, GC Micro, Software By Design, ShadowSoft, Sigma Solutions and Abtech Systems.
With about 68 percent of Red Hat’s revenues driven by channel partners, it’s hard to imagine IBM, Dell, HP, Fujitsu, and the other key server players, all with storage hardware and software of their own to sell, enthusiastically embracing RHSS except where customers demand it.
Red Hat’s Gluster File System aggregates the individual file systems running on the server nodes of a cluster and exposes them as a single global namespace that clients can mount over NFS or CIFS.
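As a rough sketch of what that looks like in practice (the hostnames, brick paths, and volume name here are hypothetical, and the exact options vary by release), building and mounting a small replicated Gluster volume goes something like this:

```shell
# On one node of the trusted pool: add a peer, then create a replicated
# volume from one "brick" (a local directory) on each server.
gluster peer probe server2
gluster volume create myvol replica 2 \
  server1:/export/brick1 server2:/export/brick1
gluster volume start myvol

# On a client: mount the whole namespace with the native GlusterFS client...
mount -t glusterfs server1:/myvol /mnt/gluster
# ...or over plain NFSv3, which Gluster also serves
mount -t nfs -o vers=3 server1:/myvol /mnt/gluster-nfs
```

Either way, the client sees one namespace spanning every brick in the volume; these commands obviously require a live Gluster cluster to run.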
RHSS is equipped with the same virtualization management console used for RHEV; in the 2.0 release that console is a tech preview, as is the ability to pipe RHSS into the Hadoop Distributed File System (HDFS) or to replace HDFS with RHSS entirely.
Rangachari said on the webcast that RHSS 2.0 integration with RHEV 3.1 had just entered beta testing; there’s no word yet on when it will be production grade or precisely what the integration will cover. Longer term, he said, Red Hat is working to make RHSS and RHEV run on the same clusters, with virtual machine containers for storage and some of the processing capacity used to drive the RHSS file system.
If Hadoop has taught us anything, it’s that putting compute and storage on the same physical devices can substantially boost performance.
The interesting thing about RHSS is that you can use the Gluster File System as an overlay on top of Amazon’s Elastic Block Store (EBS) to provide scalability and resilience across EBS volumes.
There’s still no word on what performance penalty this imposes on EBS, or what it costs compared to running RHSS on internal clusters. But if you run RHSS both on EBS and in your own data center, you can move data back and forth between the two, something more than a few system admins might find practical.
In other Linux and open source news
Here’s a question some people in the Linux community are pondering: could the open source concept eventually contain the seeds of its own destruction? Poul-Henning Kamp, a noted FreeBSD developer and creator of the Varnish web cache, wrote this year that the open-source development model has created an embarrassing mess of software.
After all, open source’s so-called ‘bazaar model’ is software development done in public and for free, as with the Linux operating system, while the ‘cathedral model’ describes development done behind closed doors, with the source code still made public with each new release.
To most observers these are just two small nuances of the same idea. Kamp’s summary of the bazaar model was blunter: “A pile of old festering hacks, endlessly copied and pasted by a clueless generation of IT professionals who wouldn’t recognize sound IT architecture if you hit them over the head.”
“Under this embarrassing mess of software lies the ruins of the beautiful cathedral of Unix, deservedly famous for its simplicity of design, its economy of features, and its elegance of execution,” he wrote in a piece titled A Generation Lost in the Bazaar.
With major Linux updates such as Ubuntu 12.10 and Fedora 18 due this fall, and with Microsoft shipping a platform shift in Windows 8, it’s worth asking: is open-source software doomed to a fate of facsimile, and is there any way we can truly save it?
By the end of the 1980s, things were looking bad for Unix. AT&T’s former Unix projects had metastasised into dozens of competing products from all the major computer manufacturers, plus clones and academic versions, all slightly different and subtly incompatible – sometimes even multiple different versions from a single manufacturer.
Richard Stallman’s GNU Project to create a free alternative to Unix was moving ahead, but it hadn’t produced a complete operating system because it didn’t have a kernel.
BSD was struggling to free itself from the lingering vestiges of AT&T code, so it wasn’t a completely free operating system yet either.
Meanwhile, the IBM PC was quietly taking over, growing in power and capability. IBM had crippled OS/2 by mandating that it run on Intel’s 286, a chip that hamstrung an operating system’s ability to multitask existing DOS software, even though OS/2 came out after Intel introduced the superior 386 processor, which could easily handle multiple “real mode” DOS apps.
The field was thus wide open for Microsoft, which had already had an accidental hit with Windows 3. Microsoft hired DEC’s star kernel architect Dave Cutler and set him to rescuing the “Portable OS/2” project. The result was Windows NT: the DOS-based Windows 3 and its successors bought Microsoft enough time to get the new kernel working, and today NT’s descendants run on about 90 percent of desktop computers worldwide, in business and in just about any industry you can think of.
But Windows doesn’t have everything its own way. It has two main competitors, both flavors of Unix with free software in their DNA. On one hand there’s Apple, with Mac OS X and iOS, close relatives both built on Apple’s Darwin OS, which uses bits and pieces from BSD and other free software projects.
On the other is GNU/Linux, the fusion of a free software kernel and the GNU Project’s array of tools and programs. Note the careful use of the term free software rather than “open source”; the latter is a corporate-friendly term that came later, and it’s not quite the same thing.
The ideals laid down by the GNU Project’s Richard Stallman in 1983 are what made free operating systems like GNU/Linux possible: to escape from the endless spiral of competing proprietary products, programmers should share their code under licences that compel others to share it too.
It’s a simple agreement that forces people to confer their rights on others, which is why the GNU Project calls it “copyleft”: it uses copyright law to drive the licence.
The trouble is that not everyone shares nicely; some people do the minimum needed to comply with the rules. BSD carries one of the original idealistic free software licences: it permits use of the code so long as a small credit is included somewhere. As a result, many bits of BSD-licensed software get lifted for free and hidden away inside commercial products, and since changes to the source don’t have to be given back, most of the time they aren’t. That’s where many of the problems lie.
When Unix was young, it was a clean, command-line-driven OS built around two ideas: everything is a file, and small single-purpose programs can be linked together with pipes to achieve sophisticated results.
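That philosophy still works today. A classic illustration (the input text here is just a placeholder) is counting word frequencies by chaining single-purpose tools:

```shell
# Each stage does one small job; pipes glue them into a word-frequency counter.
printf 'to be or not to be\n' |
  tr ' ' '\n' |   # split the line into one word per line
  sort |          # group identical words together
  uniq -c |       # count each run of identical lines
  sort -rn |      # highest counts first
  head -3         # keep the top three
```

None of these tools knows anything about the others; the composition is what does the work, which is exactly the economy of design Kamp says has been lost.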
But today’s Unix descendants are large, complex graphical beasts, and so are their apps. Any significant modern application is a huge project in and of itself, requiring large teams of cooperating developers, and developers are just people, ordinary people like you and me.
Sometimes they fall out; sometimes they just want to go off and do their own thing. The end result: lots of competing projects, with the available developers spread thin across them all. So here’s the question again: could the open source concept self-destruct given enough time? Let’s hope not, because that would be a crying shame.
In other Linux and open source news
In these challenging times for chip maker AMD, and as part of its initiative to cut operating expenses by trimming staff, the company has closed its Dresden, Germany-based Linux lab, the Operating System Research Center (OSRC).
The lab focused mainly on enterprise development of Linux kernels optimized for AMD microprocessors. Founded in April 2006, the OSRC acted as a go-between for the Linux development community and AMD’s processor design teams worldwide.
AMD’s OSRC played a particularly critical role in ensuring Linux support for next-generation AMD products.
The OSRC specialized mainly in OS virtualization, cloud integration, memory management, multi-core scheduling and performance measurement, aimed at making better use of future multi-core architectures.
Generally speaking, the laboratory oversaw kernel patches supporting AMD server processor features in Linux distributions such as Red Hat Enterprise Linux and CentOS.
The OSRC had about 25 employees who helped integrate important changes for new AMD platforms into Linux distributions such as Red Hat Enterprise Linux (RHEL) and SUSE Linux Enterprise. Some of its kernel developers also worked on open source virtualization solutions such as the Xen hypervisor.
Some Linux developers from AMD Research are also located in Austin, Texas. From now on, the U.S. team will have to incorporate support for AMD’s code-named Steamroller and Excavator high-performance cores, as well as its Jaguar low-power cores, into the Linux kernel. AMD said this should be done within the next three to four months.
Overall, AMD plans to lay off about 15 percent of its workforce this quarter to reduce operating costs, in an effort to save $1.3 billion in 2012. If that happens, the company could reach the break-even point next year.
In other Linux news
Ask just about any software developer or system integrator and most will tell you that every day is different in information technology and application development. There’s always something new to learn, and that’s an important part of what keeps them motivated and excited.
But one area has been losing their interest lately: the GNOME project. If you’re new to Linux, GNOME is a desktop environment and graphical user interface that runs on top of the operating system. It’s composed entirely of free and open source software, just like Linux itself.
GNOME is an international project whose work includes creating software development frameworks, selecting application software for the desktop, and building the programs that manage application launching, file handling, and window and task management.
But lately GNOME has been losing a lot of key players, and some in the Linux community see this as a major concern. Not that Linux is dying; on the contrary, Linux is alive and well, growing in popularity at more than ten percent a year, especially in data centres and mission-critical enterprise applications, and that’s not about to change.
But the GNOME project has hit many roadblocks in the last year, and may be headed for a major falling-out with developers after the release of GNOME 3.
At the core of the issue is the radical rewrite of the whole desktop. And that’s where the majority of the Linux community draws the line.
So it’s not just a few unhappy users ditching the project; Linux application builders and software developers are leaving as well.
For instance, GNOME project developer Benjamin Otte notes that core Linux developers are leaving GNOME development in droves, and that the project is extremely understaffed, has no specific goals and is losing market share at an alarming rate.
Of course, not everybody agrees with Otte, but the fact remains that he is a well respected member of the Linux community, and as such, many people are following his comments with interest.
But it gets worse: in the last week, two major Linux distributions have jumped ship entirely, preferring to create their own desktops. Canonical’s Ubuntu team has the Unity desktop, and popular newcomer Linux Mint has created not one but two new desktop projects. So Otte does have a point.
If that wasn’t demoralizing enough for GNOME developers, Linus Torvalds himself called GNOME 3 an unholy mess, adding that he’s never met anyone who likes it. And Torvalds is, after all, the creator of Linux.
Torvalds is known for the occasional outburst, but taken together, all these reports of GNOME being in trouble paint a picture that looks more desperate with each passing day.
Source: Red Hat.