Ubuntu developers have brought back zRAM and are now using it as part of the default Ubuntu Linux installation in a clever way.
First, for those not familiar with zRAM, it’s a Linux kernel module (formerly called compcache) that tries to improve system performance by using a compressed block device in RAM in an effort to avoid swapping/paging on disk.
The zRAM kernel feature is intended for systems with low amounts of system memory. With the Linux 3.8 kernel, the zRAM feature will leave the kernel’s staging area.
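For readers curious what zRAM actually does under the hood, the manual setup looks roughly like this (a minimal sketch to be run as root; the 256M size and the swap priority are illustrative choices, not Ubuntu defaults, and the sysfs interface has varied slightly across kernel versions):

```shell
# Load the zram module (creates /dev/zram0 by default)
modprobe zram

# Size the compressed block device; memparse suffixes like M/G are accepted
echo 256M > /sys/block/zram0/disksize

# Format it as swap and enable it with a high priority,
# so the kernel swaps to compressed RAM before touching the disk
mkswap /dev/zram0
swapon -p 100 /dev/zram0
```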
Overall, while RAM is still rather inexpensive these days, there are still users and developers in the Linux community who want zRAM to be used by default within Ubuntu.
In a few cases, such as the Ubuntu Nexus 7 and certain other ARM images, zRAM is already deployed, but it’s not yet enabled in the stock Ubuntu x86/x86_64 installs.
The zRAM kernel configuration option is present and there’s a zram-config package within the Ubuntu package archive, but the Ubiquity installer isn’t configuring it for use. The zRAM capability is mostly a win for netbooks, mobile devices and other cases with very restricted amounts of RAM.
The Ubuntu zRAM discussion was brought back up yesterday on the Ubuntu Devel Discuss list. The discussion is still active, but if zRAM is to be deployed within Ubuntu on a larger scale, it will likely come down to a feature that’s determined at install-time.
If the system’s available RAM is below a certain amount, the zRAM feature could then be enabled via zram-config while staying disabled on modern servers with several gigabytes of system memory.
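The install-time heuristic described above could be sketched as a tiny shell check. The 1 GiB cutoff, the function name and the enable/skip actions are all assumptions for illustration, not Ubuntu’s actual policy:

```shell
#!/bin/sh
# Hypothetical install-time check: enable zram only on low-memory systems.

ZRAM_THRESHOLD_KB=1048576   # assumed cutoff: 1 GiB, expressed in kB

# Decide from a MemTotal value (in kB) whether zram should be enabled.
should_enable_zram() {
    mem_kb=$1
    if [ "$mem_kb" -lt "$ZRAM_THRESHOLD_KB" ]; then
        echo "enable"   # e.g. the installer would pull in zram-config
    else
        echo "skip"
    fi
}

# On a live system the real value would come from /proc/meminfo:
# mem_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
should_enable_zram 524288    # 512 MiB netbook -> enable
should_enable_zram 8388608   # 8 GiB server   -> skip
```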
There’s also this Launchpad bug report going back to 2009 about enabling zRAM / Compcache by default. We’ll see what happens as the discussion continues and whether any change is warranted for the upcoming Ubuntu 13.04.
In other Linux news
Richard Stallman, president of the Free Software Foundation, is calling Canonical’s Ubuntu Linux spyware and urging the Linux and open source community to uninstall the operating system, distance itself from the company and give Canonical whatever rebuff is needed to make it stop what it’s doing.
Based on Debian, Ubuntu is one of the most popular versions of Linux. Stallman is talking about its new network search feature, which he believes spies on Ubuntu users.
Stallman says: “Ubuntu, a widely used and influential GNU/Linux distribution, has installed surveillance code and a sort of malware. When a user searches his or her own local files for a string using the Ubuntu desktop, the operating system sends that string to one of Canonical’s servers. (Canonical is the company that develops Ubuntu.)”
Even Canonical’s own CEO Mark Shuttleworth talked about that search feature on his personal blog, prophetically subtitled “here be dragons.” Essentially, searching your files on your own computer is also, by default, an online search, at least according to Canonical.
That online search includes potentially relevant results from Amazon, and if you buy something, Canonical gets a cut. This is not advertising, according to Shuttleworth: “We’re not putting ads in Ubuntu. We’re integrating online ‘scope results’ into the home lens of the dash.” Huh??
That extremely fine, perhaps microscopic distinction has escaped some of Canonical’s customers, who are wondering why, in the first place, a desktop search should be integrated with an online search at all, and why, in the second place, that online search goes to an online retailer rather than to Google.
As JunCTionS says in a comment on Shuttleworth’s blog post: “Sorry if this is clear to everyone else, but you don’t seem to mention any typical websearch engine. I imagine there are even more Ubuntu users that use Google than those that use Amazon. Will it also search Google?”
But for Stallman, the main issue here isn’t advertising per se, although that’s certainly unwelcome in the Linux and open source community. The core problem is the handling of personal user information: even though Canonical doesn’t send any personal information to Amazon, running the Amazon search query on its own servers, based on information that it retains, is still a big issue.
That has failed to mollify RMS, who wrote that “it is just as bad for Canonical to collect your personal information as it would have been for Amazon or Google to collect it in the first place.”
Shuttleworth’s answer seems to be: “Just trust us. (Really?) After all, we control your machine anyways — we have administrator privileges on your computer. We are not telling Amazon what you’re searching for. Don’t worry. Your anonymity is preserved because we handle the query on your behalf.” Ha!
Don’t trust us? Erm, we have root. You do trust us with your data already. You trust us not to screw up on your machine with every update. You trust Debian, and you trust a large swathe of the open source community. And most importantly, you trust us to address it when, being human, we err a bit here and there… Hummmm.
That is not very compelling or reassuring if you ask us. In a post on the Canonical blog yesterday, the company addressed the issue again, at least to a certain awkward degree, running through the new capabilities: a search for the Beatles, for example, will bring up their music on Amazon, where it can be instantly purchased without opening a browser.
For its part, Canonical says that privacy concerns have been a primary goal while developing the new service: “Privacy is extremely important to Canonical. The data we collect is not user-identifiable. We automatically anonymize user logs and that information is never available to the teams delivering services to end users. We make users aware of what data will be collected and which third party services will be queried through a notice right in the Dash, and we only collect data that allows us to deliver a great search experience to Ubuntu users.”
Then, Canonical goes on to say “We also recognize that there is always a minority of users who prefer complete data protection, often choosing to avoid services like Google, Facebook or Twitter for those reasons, and for those users, we have made it dead easy to switch the online search tools off with a simple toggle in settings.”
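For completeness, the ways users reportedly switched the online results off at the time looked roughly like this. The package and settings key names reflect Ubuntu 12.10-era Unity and are assumptions that may have changed in later releases; the graphical toggle Canonical refers to lives under System Settings > Privacy:

```shell
# Option 1: remove the Amazon shopping lens entirely
sudo apt-get remove unity-lens-shopping

# Option 2: flip the online-search toggle from the command line
# (assumed gsettings schema/key for Unity's remote result scopes)
gsettings set com.canonical.Unity.Lenses remote-content-search none
```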
Of course, aside from the issue of how unusual it would be for someone to be searching their own computer for commercially-useful queries like “the beatles,” or “Lord of the Rings movie,” this is unlikely to satisfy privacy advocates, especially in the U.S.
In other Linux and IT news
To say that open source isn’t going through many changes these days would be to ignore reality and what’s been happening in the IT and Linux community in the past three years. After all, it’s rather difficult to fake giving away your software for free, although there were more than a few companies over the years that were called out for being ‘false-open source’.
To be frank, some say that the term ‘open source’ has already outlived its hype and no longer provides real value to the IT industry and its end users. To be sure, it’s been a good slice of life. The dot-com bust and the ensuing recession of 2000 actually placed open source in the American corporate psyche.
Severe budget cuts between 2000 and 2005 led to the rise of open source software (OSS) utilization, mostly by technical visionaries who saw the potential for what they could create using open source building blocks.
And understandably, IT managers approved OSS, seeing that it was completely free or at least very low-cost when compared to the big vendors such as Microsoft and Oracle.
Then Wall Street investors saw the process of creative destruction at play as the first commercial open source leaders (MySQL, Red Hat and JBoss) took a huge piece of market share away from Microsoft, IBM and Oracle.
After that, a whole new ecosystem of enterprise OSS firms all over the globe was developed, covering everything from groupware and portals to content management, ERP and CRM.
A number of high-profile acquisitions (e.g., Zimbra for $350 million and JBoss for $420 million) culminated in the sale of MySQL to Sun Microsystems for $1 billion in early 2008. Those companies proved that open source development methodologies could be coupled with profitable business models that commoditized the software but delivered real value to enterprise customers in other forms.
To be sure, MySQL’s dual-license model, paired with a support and maintenance subscription, was one obvious way to monetize open source software, and it became an early standard in the IT industry.
We likely would have continued on this trajectory had the financial crisis of 2008 not ground the planet to a halt. Meanwhile, the SaaS (Software as a Service) model had been gaining significant traction since 2005, and the global meltdown of 2008 only accelerated that process.
Like open source and Linux, SaaS had lower upfront costs than proprietary software did, although, unlike open source, SaaS offerings come with a dangerous risk of long-term lock-in.
So many companies saw SaaS offerings as another method to save money and slotted these in alongside their OSS investments. But the second big recession in commercial open source history also made a lot of companies double down on their OSS investments, and as they did so, they discovered that the true benefits of open source were much more than just saving money on licenses.
Mid-size and Fortune 500 companies found that open source software was more secure, easier to maintain and better at driving innovation than its proprietary counterparts. Several experienced faster time-to-market for strategic business initiatives built using open source software, something that has been very prevalent in the industry, especially since 2010.
And governments all over the world have also taken notice, especially in Europe where there’s been a proliferation of new policies that mandate the use of open source in the public sector.
To name just a few, Belgium, Croatia, Denmark, Italy, the Netherlands, Norway, Spain, Sweden and the United Kingdom all have mandated the use of Linux and open source software or open source standards in certain government entities or, in some cases, across the entire segment.
To see just how strong the open source segment remains this year, you only have to look at the success of Red Hat, which, as the biggest public open source company, reported surpassing $1 billion in revenue in 2012. And the business models are changing: Red Hat doesn’t rely on a commercial license the way MySQL did.
Liferay’s Marketplace is another example. That public apps repository opens up business to communities and partners who will be able to sell various applications built on its open source platform.
Beyond just revenue, the open source movement is thriving as a key resource for some of the biggest names in technology. Amazon, Twitter and Google are big users of open source software in their cloud or cloud-based services. These well-endowed companies could choose any software on the market. The fact that they’re using open source software to power their most mission-critical systems proves open source’s long-term value and overall reliability.
To be sure, open source is also playing a major role in this latest industry darling: “Big Data.” The enormous growth of user activity mediated by connected devices (whether mobile or desktop) has created vast amounts of data that can yield valuable insights into customers if the data is properly searched, analyzed and visualized into actionable information.
Various open source projects, such as Apache Hadoop and MongoDB, are at the forefront of Big Data management. Unlike proprietary software systems, which often are built to the specifics of just a few major customer models, these open source projects are meeting the needs of a broad user community.
A flexible and agile core provides a platform on which a much larger user community can build specialized solutions that fit their specific needs. Subgroups within the community have become a symbiotic ecosystem, contributing to the core project but also creating focal centers for experts to rally around and collaborate on their specific Big Data problems.
But perhaps the biggest impact of open source principles will be seen outside the discipline of software development. Open source has been a bold experiment in producing value through open collaboration rather than closed competition. Even the biggest, richest company on the planet had to take a crowd-sourced, open approach to building up its maps database.
Apple’s recent admission, in relying on users to correct its maps, was that certain problems are too big for any one company to solve on its own. The open source model might help solve the issues we face as a global community, which increasingly demand this kind of cooperation because they are too big to be solved the traditional way. Or are they?
Ultimately, open source’s legacy should be the way its success paves the way for other industries to maximize value through collaboration. Collaborative approaches can be applied to scientific research, drug development or manufacturing. With limited natural resources, we can ill afford to work in ‘small islands’ of redundant competition for much longer.
Let’s all seize the enormous opportunities for breakthroughs and innovation that lie in working together as a strong open source community. The benefits will be almost limitless and will most assuredly bring plenty of satisfaction to all its players, notwithstanding the enormous economic benefits attached to such a strong community.