Sandboxing from an SELinux/Mac developer
Friday, November 4, 2011
Who am I
I'm in a unique position with regard to the current Mac App Store sandboxing debate. I'm a Mac app developer who, until recently, was an SELinux developer. SELinux is a Linux access control mechanism that can confine applications in a similar way to Apple's sandbox mechanism. So, I've been on both sides of this debate.
A bit of history
To understand some of the problems with the current sandboxing mechanism for apps on OS X, it's important to understand the history of sandboxing on OS X. The story actually begins with SELinux. SELinux is an access control mechanism that lets you control all application interactions on a Linux system. You do so by writing a policy. These policies can allow or deny interactions at a very fine-grained level. Consequently, they are known for being complicated, which has long been SELinux's biggest black eye. People complain that it's too hard to use, and many application vendors simply recommend turning SELinux off rather than taking the time to write a policy to make their application work.
Despite its complications, SELinux was a giant step forward in securing systems, and many people quickly wanted similar functionality in other systems like Mac OS X. So, some guys started working on adding similar capabilities to FreeBSD and Darwin (Mac OS X's open source core). They created the MAC Framework (MAC here stands for Mandatory Access Control), which allows enforcing this sort of access control. They then built SEBSD and SEDarwin on top of the MAC Framework to allow developers to create SELinux-like policies on FreeBSD and Mac OS X.
Apple then took this work and incorporated it into Mac OS X. I am not privy to how it went down inside Apple, but they ended up taking the MAC Framework but deciding not to use SEDarwin. Instead, they built their own policy engine, which was initially called seatbelts before being renamed to simply sandbox. This was released as part of Leopard in 2007, though it was not used very much then. Users and developers could then sandbox their applications, and even write flexible sandbox policies tailored to their application (though I'm guessing I'm one of the only people to do so).
The big change in Lion is the addition of sandbox entitlements. Entitlements are capabilities that an app declares it needs. The system then creates a sandbox policy for the application to run inside of based on the entitlements requested. Lion includes a short list of high-level entitlements that map to sandbox rules. My guess is that Apple decided writing sandbox rules would be too complicated and error-prone for developers, so they made entitlements as a simple checklist abstraction on top of sandbox rules.
Much of the debate about sandboxing Mac App Store apps centers around the many apps that cannot work within sandboxes or that will have to drastically change their core functionality to do so. This is not a fundamental flaw in sandboxing. You could write a sandbox policy for every application out there. The problem is that Apple's entitlements abstraction is not nearly as flexible as the underlying sandbox mechanism. This prevents huge numbers of apps from working with sandboxes today.
This limitation is not surprising at all. On the SELinux side, I worked on several large efforts to create simpler abstractions on top of SELinux policy. They all failed in one of two ways. The first was being deemed too inflexible to work with a large percentage of applications, so they never gained traction. The second was gradually expanding the abstraction to add flexibility until it ended up as complicated as the underlying system it was supposed to abstract. I'm not sure where Apple will go in the long run, but I see elements of both in the current entitlements system.
Why are we doing this?
It's clear that the sandbox mandate will have a huge effect on applications, but is it really going to help the security of Mac systems? I can easily make a case for why confining Linux applications with sandboxes makes sense. Linux is frequently used in server systems, and the applications running on those servers are constantly under attack. Web servers, mail servers, etc. regularly have exploits run against them. Confining these applications can protect them and their underlying system.
Mac apps, on the other hand, are not under attack. The few attacks that exist target the operating system itself or the web browser. Confining third-party applications won't protect against these attacks. If Apple manages to lock down OS X itself well enough that attackers start to target third-party apps, this move might make more sense. However, today the obvious attack vectors in OS X are all going after Apple software, so attackers won't waste their time with third-party apps.
My first recommendation would be to do away with the mandate. There's just not enough of a reason to do this right now. Computer security is all about determining risk and appropriate responses. The risks are too low to justify this large a mandate.
I assume Apple won't take my first recommendation, so I have a fallback. I don't believe the entitlements mechanism will ever be flexible enough for a huge percentage of Mac apps. So, I'd love to see an option to forgo entitlements and instead write your own sandbox policy. This would give developers the flexibility they need. Apple could easily create automated policy analysis tools to flag developer policies that were dangerous or against the App Store guidelines. It would be more complicated for developers who chose to write their own policy instead of using entitlements, but that's better than being forced out of the App Store entirely.
Saturday, June 11, 2011
I thought it might be fun to document my workspace to show off the glamorous life of working from home. The picture above is my desk. It's located at one end of our dining room. The rest of the room is taken up by a dining table covered in Jen's sewing and craft stuff. The dining room has no doors, but we've currently got baby gates on the doorways to keep the girls from wandering in.
The desk is an Ikea Galant, with the smaller 24" deep top to conserve space. On top is my MacBook Pro resting on an Xbrand Height Adjustable Laptop Stand. The laptop is connected to an IOGear KVM, which in turn connects to a 22" Dell 2209WA monitor, a full-size USB Apple keyboard, and a Logitech Trackman Wheel trackball. On the right is my iPad 2 standing in a Walnut Ledger stand from Stand This Up. And, of course, a glass of iced tea (sweetened with Splenda) and my Fidelity Custom earphones (for blocking out the sounds of the house, since the room has no doors). You can also see a corner of the old PowerMac G5 on the bottom right, which I keep around for testing Pear Note on PowerPC and Leopard.
The blue thing in front of the keyboard is the exercise ball I've started sitting on. I like it, and plan to continue using it. It is definitely helping my posture and strengthening my lower back. I can't quite make it through a full day sitting on it yet, but I'm getting pretty close.
Choosing an everything bucket
Friday, March 25, 2011
Up until now, I haven't used an everything bucket (though I have talked about them a bit in contrasting them with Pear Note). I never saw the use for them. Like Alex Payne, I found the filesystem was plenty for me. Why would I need an application to organize my stuff when that's exactly what the filesystem already does?
Recently I rethought this position, though not on purpose. I was struggling to keep track of helpful docs and references I kept finding. Most of these were blog posts, though some were PDFs. Up until this point, I'd just been bookmarking the web pages in Safari and storing the PDFs in a folder on my local drive. This led to some problems:
- Unsearchable bookmarks - I'm a big fan of using search to find things. Searching local bookmarks is not very useful, as you're just searching the title and URL, rather than the contents. Consequently, I usually ended up searching the Internet for something I'd already found and bookmarked.
- Websites go away - Occasionally, a useful resource will go offline temporarily or permanently. Bookmarks don't store content, so I lost that content.
- Two places - Storing resources in multiple places sometimes meant I spent way too much time looking in one place (e.g. my bookmarks) only later to realize that what I was looking for was in a PDF on my local drive all along.
My first thought was to stop bookmarking locally and start using a fancy bookmarking site like Delicious. Given the recent doubt about Delicious's future, I looked at some of its alternatives. Pinboard seemed like the best choice, and they even had an option to archive all the things you bookmark for $25/year (solving the second problem above). I was about ready to pull the trigger on this when I remembered those old everything buckets.
The problem I had with using Pinboard for this was that it could only handle things on the Internet. Anything else that I create, have locally, or get privately would be unreachable. So, I'd still end up with things in multiple places. If only there were something that could store anything... Perhaps I had a use for an everything bucket after all. And what better way to pick one than a good old bake-off?
Evernote
- Exists everywhere I care about (Mac, iPhone, iPad)
- I really don't like the UI on any of the devices, especially the Mac
- Primary means of saving web content is "web clipping" which is clunky and not what I'm looking for
- Free account is very limited in file types, space, file size, and where it will search
- Pro account is still limited in file size
- Pro account costs $45/year, which would add up
I actually didn't look into Shovebox. I make it a policy to only introduce software into my workflow that is actively being developed, and Shovebox hasn't been updated in a year and a half. So, it was out of the running. Too bad, as I already had a license from a bundle it was part of.
Yojimbo
- Has an iPad app (though it is currently read-only)
- Very polished and put-together. Best of the bunch in this regard.
- Supports archiving URLs
- Workflow for URL archives is odd, as the original URL is only present in the comments field
- App activates every time you drag something to its dock icon (I have no interest in using the drawer)
Together
- Well put-together, though not as polished as Yojimbo
- Supports archiving URLs and makes it easy to get to either the page on the web or the archive of it locally
- No iOS app
In the end, I chose Together and am quite happy with it. It's very similar to Yojimbo, but I preferred some of its workflows and features. Now, when I want to keep a resource around, I just drag the URL or file icon to the Together icon in my dock. It has a nice interface for searching when I need to find things, and it stays out of my way when I don't. I do occasionally wish it had an iOS or web component for those times when I'm on the go and someone tweets a link to something useful, but I can make do without that.
Wednesday, February 9, 2011
I love backup. I know that sounds weird, but I love backup. Before most information was digital, backing it up was incredibly hard. Consequently, the data of our lives was always in danger of being destroyed. Now that most data is digital, we have the ability to back it up so that it can survive extreme situations.
I grew up in Slidell, Louisiana, just outside of New Orleans. My parents still lived there when Hurricane Katrina hit. They got out and were safe, but their house flooded with about 3 feet of water. Like most families, they had books, papers, and pictures of my childhood throughout the house. Much of it was lost. I don't want to risk losing pictures and video of my kids or the code that I've spent years working on.
The good news is that a huge portion of my life, both personal and professional, is digital. I can back up this digital data and be comforted that it could survive flood, fire, theft, or hack. So, I love backup. And because I love backup, I want to do it well.
My system of backing up data includes two main backup locations. The first is local, complete, and high-integrity. The second is remote (in the cloud). I'll describe what I do first, and then the principles behind this setup.
I have a backup system running CentOS Linux. In addition to the little system drive, it has five 1.5TB drives in a software RAID-5 array, giving 5.4TB of usable space. Each night, a cron or launchd job on each of my systems runs rsync to create a new snapshot on the backup system. Each snapshot is a complete backup of the system, but I use rsync with the --link-dest option, which means unchanged files are just hardlinks to the already existing file rather than new copies. This saves disk space, allowing me to keep lots of snapshots around.
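For the curious, here's a minimal sketch of what one of those nightly snapshot jobs could look like. The server hostname, directory layout, and excludes here are hypothetical; my real jobs handle more edge cases.

```python
#!/usr/bin/env python
# A minimal sketch of a nightly snapshot job run on a client machine.
# It pushes the whole system to the backup server with rsync, using
# --link-dest so unchanged files become hardlinks to the previous snapshot
# instead of new copies. Hostname, paths, and excludes are hypothetical.
import datetime
import subprocess

SERVER = "backup.example.com"              # assumed backup server
REMOTE_ROOT = "/backups/mymac"             # assumed per-client directory on the server
today = datetime.date.today().isoformat()  # snapshot directories are named by date

subprocess.check_call([
    "rsync", "-aH", "--delete",
    "--exclude", "/proc", "--exclude", "/sys", "--exclude", "/dev",
    "--link-dest", "%s/latest" % REMOTE_ROOT,   # "latest" symlink maintained server-side
    "/",                                        # back up everything on this machine
    "%s:%s/%s/" % (SERVER, REMOTE_ROOT, today),
])
```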
Each day, after the backups are complete, a Python script run from cron on the backup system processes the snapshots. I keep the last 7 daily snapshots, one snapshot from each of the last 12 months, and one snapshot from each year since I started this. The script removes old snapshots that aren't in this list.
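A rough sketch of that retention logic, assuming snapshots are directories named by ISO date (the real script also does the relabeling described below and more error handling):

```python
#!/usr/bin/env python
# A rough sketch of the retention pass run on the backup server: keep the
# last 7 daily snapshots, the newest snapshot in each of the last 12 months,
# and the newest snapshot from every year. Paths and naming are assumptions.
import datetime
import os
import shutil

BACKUP_DIR = "/backups/mymac"   # assumed: one subdirectory per snapshot, named YYYY-MM-DD

snapshots = []
for name in sorted(os.listdir(BACKUP_DIR)):
    path = os.path.join(BACKUP_DIR, name)
    if os.path.islink(path) or not os.path.isdir(path):
        continue                # skip the "latest" symlink and any stray files
    snapshots.append(name)

keep = set(snapshots[-7:])      # the last 7 dailies

monthly = {}                    # newest snapshot seen for each (year, month)
yearly = {}                     # newest snapshot seen for each year
for name in snapshots:
    date = datetime.datetime.strptime(name, "%Y-%m-%d").date()
    monthly[(date.year, date.month)] = name
    yearly[date.year] = name

today = datetime.date.today()
for months_back in range(12):   # one snapshot from each of the last 12 months
    year, month = today.year, today.month - months_back
    while month < 1:
        month += 12
        year -= 1
    if (year, month) in monthly:
        keep.add(monthly[(year, month)])

keep.update(yearly.values())    # one snapshot per year, kept forever

for name in snapshots:
    if name not in keep:
        shutil.rmtree(os.path.join(BACKUP_DIR, name))
```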
To protect one system's snapshots from another, and to prevent snapshots from being modified after the fact, I have a custom SELinux policy on the backup system. When each client system logs in, it gets a unique SELinux type. The policy only permits that type to write to its own backup directory, meaning it cannot mess with another system's backups. Also, each day when the server processes the backups, it relabels the new snapshots to a type that can only be read and linked to, so subsequent backups cannot modify the contents of a previous snapshot.
I use CrashPlan Family for remote backup. I run the CrashPlan agent on all my machines (even headless ones) and back everything up to the cloud. It can certainly take a while to get 4TB up there initially, but the service runs on Mac and Linux and doesn't impose silly restrictions like file-size limits, as many of its competitors do.
Backup data needs to be live
Many people use backup strategies that involve making a backup and storing it away. I used to do this, but stopped for two reasons. First, media can degrade, and if it's not live there's no way to detect this. I've burned CDs with backup data, only to find out years later that they had degraded and could no longer be fully read. The second reason is that standards change. I used to back things up to a Zip drive, only later to discover that I couldn't use these backups any more because no one (myself included) used Zip drives any more.
Backups should be automated
I don't clone to an external hard drive as part of my backup system (though I will occasionally use SuperDuper to clone a drive before I make a drastic change). I've tried to do this, but it's way too hard to remember to plug the drive in. So, I do only network backups. They just happen, and I don't have to take any action (though I do get emails saying it's been done).
Keep lots of versions
My backup is a bit of an archive as well. This is because I've found that the largest threat to my data is me. I delete things from my systems, thinking I don't need them any more, and then discover years later that I really did need that thing. So, I think it's important to keep a lot of old versions, which is why I never delete my yearly snapshots. It's also one reason I have local backups in addition to the cloud: no cloud backup service I've found will keep this many versions for this long.
Think about attacks
When planning backup strategies, people usually think about hard drive failures or natural disasters, but they often forget about the possibility of an attack. If someone successfully attacks your computer and corrupts your data, you don't want them to be able to alter your previous backups. This is a problem for simple solutions such as an external hard drive or a network share. If an attacker can corrupt your system, they can also corrupt your backups.
Back up everything
Most people back things up selectively. They back up only their documents, or specific directories, or perhaps their home directory. My advice: back up everything. That means everything on your drive(s). Why? Do you need everything? No. However, I've found that I always miss something when trying to list what needs to be backed up. If I select certain directories, inevitably I will add another directory later that needs to be backed up and forget to do so. Backing up everything solves this.
You may say that I'm wasting lots of space backing up things that can easily be reinstalled. System software and applications can be grabbed from install DVDs or the Internet, so why waste the space? Well, if you're like me, the user data that needs backing up is far, far larger than this. I have 4TB worth of used storage across my systems right now. Of that, less than 100GB is system software and applications. I could avoid backing this up, but I'd rather be sure I've got everything and use an extra 2.5% of storage.
Think about restoring
Backup to the cloud is outstanding, but it is slow. I average one dead hard drive per year across my systems. A system with a lot of data would take weeks to restore over the network. CrashPlan (like other similar services) does offer the option to overnight me a hard drive for restoring, but it's quite spendy. Local backups mean I can get back up and running much more quickly.
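To put a rough number on "slow," here's the back-of-the-envelope math (the connection speed is an assumption, not my actual line):

```python
# Back-of-the-envelope restore time for pulling a full backup down from the cloud.
# Assumes a 20 Mbit/s downlink running flat out, which is optimistic.
data_tb = 4.0                           # total data to restore, in TB
link_mbit = 20.0                        # assumed download speed, megabits/second

seconds = (data_tb * 1e12 * 8) / (link_mbit * 1e6)
print("%.1f days" % (seconds / 86400))  # about 18.5 days at these numbers
```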
Think about who controls your data
While it's important to trust any backup provider you use to treat your data appropriately, I'm more worried about them going out of business. If CrashPlan goes out of business, then those backups are gone. This is yet another reason I have local backups as well. If CrashPlan goes away, I still have some sort of backups while I get a new cloud backup solution going.
Yes, I'm crazy
I realize that this may all seem crazy, and it certainly is somewhat extreme. Hopefully this has inspired you or caused you to think a bit more about backups. If you're not backing up today, you need to start. The easiest thing to do is just sign up for CrashPlan or a similar cloud backup service. It's easy, relatively cheap, and can survive fire, flood, and most stupid mistakes. And remember, the ability to backup is a great attribute of the digital age, so be thankful for backups.
Avoid Verizon Wireless
Thursday, February 3, 2011
Yes, I know the tech world is excited about the iPhone coming to Verizon. Many have proclaimed that Verizon is the best network in the U.S. In general, Verizon defenders claim better coverage and fewer dropped calls as what makes the network the best. This obviously is dependent on where you are (as Jason Snell points out in his very reasonable Verizon iPhone 4 review). For me, one of the reasons I was happy to switch from Verizon to AT&T a couple years ago was because I had so many dropped calls on Verizon. AT&T has been much more reliable for me. And AT&T and T-Mobile both best Verizon in data speeds throughout much of their 3G coverage area due to their 3G technology choice.
But I wouldn't tell you to avoid Verizon because of any of this. These are minor issues that will vary from one area to the next. You should avoid Verizon because of how they're going to "optimize" your wireless Internet connection. You may have missed this today, as many news outlets focused on the other half of the Verizon document, the part about throttling. It seems people were surprised that Verizon's unlimited data plans were actually going to be limited. While that may be annoying, I take bigger issue with their optimization of network traffic.
As BGR reported, Verizon is both throttling users and optimizing content. What does optimizing content mean? It means that when your web browser asks a website for an image or a video, Verizon is not going to give you what you asked for. Instead, Verizon will convert it to something that uses less bandwidth and send that to you. Transforming images and video in this way is lossy, which means the result will be lower quality than the original.
This is unacceptable. First of all, I don't want to spend a bunch of money on an iPhone 4 with a beautiful Retina Display only to have Verizon make images and videos look worse. Apple and other smartphone vendors have made a big deal of how a modern smartphone gives you the Internet in your pocket. Not the mobile web, but the real web. Verizon's move is a step backwards for mobile devices; it's trying to take them back to being second-class Internet citizens.
Even worse, Verizon is going to do this at the network level. That means that even non-mobile devices will suffer from this optimization. If I tether my laptop to my phone to get online (which is one of the headline features of the iPhone 4 on Verizon), I get to access the Verizon-optimized Internet. So, if I then go to Flickr to download an original image to print it, I'm likely not going to get it. I'm going to get a lower quality image.
There are plenty of reasons to choose a wireless provider other than Verizon (slow speeds, no simultaneous voice and data, monthly cost, throttling, a history of overcharging), but the biggest reason to avoid them is their decision to filter the Internet. My Internet provider is responsible for delivering things from the Internet to me. They should not change things en route. They are a delivery service. I want them altering the data coming to me about as much as I want the postal service opening my mail and sending me a summary instead of the original letter.
Google makes web video harder
Tuesday, January 11, 2011
Today, Google made publishing video on the web harder. They announced that they will stop supporting H.264 video in HTML5 <video> elements in Chrome in the next couple of months. As someone who publishes video on the web (and writes a tool that publishes video on the web), I can tell you that choosing a codec is hard. The best choice today is H.264. It's the only codec that works on most mobile devices. It's the only codec with hardware decoding support (which is especially important for underpowered, battery-strained mobile devices). And up until now, it got you HTML5 <video> support in Safari, Chrome, and IE9.
What really frustrates me is that Google seems not to care about the web developers of the world. Google added HTML5 <video> support and the H.264 codec to Chrome in October of 2009. They announced its removal less than 15 months later. I'm fine with advancing standards, but adding support for something and then removing it that quickly is too much churn. Web developers shouldn't have to rewrite their sites every year to adjust to what Google wants to support this year. If Google chooses to add something, they need to keep it for a reasonable amount of time, then deprecate it for a reasonable amount of time before removing it. A 15-month feature lifespan followed by a "couple months" of deprecation is not sufficient.
I find it funny that this move will hurt not only those outside Google, but those inside as well. YouTube now must choose between supporting Google's own browser, supporting most mobile devices, or doubly encoding their videos to support both. I'm really interested to see what the Android team does. Google disapproves of H.264, but it's the only codec with hardware decoding support in mobile devices. Do they drop support for H.264 as well, leaving them with no way to play HD video and enormous battery drain from decoding low-res videos on the CPU alone? Or do they stick with H.264 despite their desktop browser abandoning it?
Google, please get your act together. I know you're not big on common vision and working together within your organization, but this is getting ridiculous.
Choosing not to worry about piracy
Friday, January 7, 2011
There's been a lot of talk amongst the Mac developer community the past couple of days about preventing piracy in apps found on the new Mac App Store (e.g. here and here). Much of it has implied that developers who chose not to implement receipt validation were dumb or lazy. There hasn't been much argument against that point, likely because developers don't want to publicize the fact that their app can be copied without being purchased. Well, I'll step up and say it:
Pear Note on the Mac App Store does no receipt validation.
This was not done because I'm lazy or dumb (well, I guess you can be the judge of whether I'm dumb). It was a conscious decision made for specific reasons.
When I released Pear Note, I created a license verification mechanism as most developers do. This is a good idea for any app that's freely downloadable on the Internet, as it provides a way to check whether a user has purchased the software and to encourage them to do so if not. As with most other devs out there, it didn't take the crackers long to find me. They released cracked copies of Pear Note within hours of every version I shipped. I fought back by obfuscating my validation code, but they found ways around it. Eventually I resolved to stop fighting. It wasn't worth my time.
I could have fought harder, but I couldn't have won. The crackers had the advantage. They had full control of the system running my software. All I could do was hope that I'd hidden things well enough that they wouldn't see what was really going on.
The good news is that it was painful for anyone who wanted to use the cracked version. Every time I released a new version (which is fairly often), they'd have to visit one of the scarier neighborhoods of the Internet to find a new cracked version. Legitimate users got automatic updates. I was ok with this compromise, as I doubt anyone willing to endure that pain would ever be willing to pay for the software in the first place.
Mac App Store receipt validation has the same problems as any other license validation. Worse, it's the same basic mechanism for all Mac App Store apps, making it easier to create a tool to crack them. I'd guess we'll see a tool in the wild soon that will be able to crack almost any Mac App Store app. (My bet is that they create their own certificate to sign a fake receipt, then binary patch an app to replace the string for the Apple root CA with the string for their own.)
The same good news from my license scheme applies to the Mac App Store. Apple authenticates users on the server in order to give them updates, which means cracked copies won't be updated, regardless of whether you do receipt validation or not.
In my opinion, that's enough pain to prevent most honest users from pirating Pear Note. And I don't have to fight a losing battle with the crackers of the world. There will be some pirates, but probably no more than I've had before. And who knows, perhaps some of the pirates that do copy Pear Note are my future customers.
Tuesday, September 21, 2010
My name is Chad, and I have a problem. I've become addicted to lenses. OK, perhaps addiction is a bit strong of a word, but it's definitely a fun hobby.
A couple years ago, I stepped up from point-and-shoot cameras to my first interchangeable lens camera. Rather than get a DSLR, I got a Panasonic G1, the first Micro Four Thirds camera. Micro Four Thirds cameras have no mirror like an SLR; instead, the sensor sits directly behind the lens. This means they can produce SLR-quality images (due to larger sensors and nice lenses) while also offering many of the features of point-and-shoot cameras (real live view, face detection, etc.) in a package smaller than any DSLR. I've since upgraded to a Panasonic GH1, which adds some of the best video around.
It didn't take long after getting the camera before I started adding lenses for different functionality. One of the really cool things about the Micro Four Thirds format is that it can accept almost any lens ever made due to its short flange focal distance. So, while I have three Panasonic lenses that are great and have all the bells and whistles, I've also started picking up legacy manual lenses for different capabilities. These have no electronics, meaning you have to set the aperture and focus by hand, but they can be really fun to use.
So, what have I got now? From left to right above:
- Olympus WCON-08B 0.8x wide angle conversion lens - fits on the 14-140 to make it more like 11mm at the wide end
- Panasonic 20mm F1.7 prime lens (Micro Four Thirds) - incredibly sharp and very fast, used more than any other lens here
- Canon 50mm F1.4 prime lens (Canon FDn) - very fast portrait lens for under $50 used
- Panasonic 14-140mm F4.0-F5.8 super zoom lens (Micro Four Thirds) - not very fast, but quite sharp, lots of reach, and almost silent autofocus (which is nice for video)
- Vivitar 135mm F2.8 prime lens (Canon FD) - a bit faster lens at this length, and cost a whole $8
- Panasonic 45-200mm F4.0-F5.6 telephoto zoom lens (Micro Four Thirds) - more reach and faster at a given focal length than the 14-140
- Vivitar 200mm F3.5 prime lens (Canon FD) - faster at 200mm than the Panasonic, and cost a whole $9
- Canon 300mm F5.6 telephoto prime lens (Canon FDn) - lots of reach, at 600mm 35mm equivalent (due to the 2X crop factor; see the quick math below), for a cost of only $27
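The quick math behind those 35mm-equivalent numbers, using the standard 2x crop factor for Four Thirds sensors:

```python
# 35mm-equivalent focal length on a (Micro) Four Thirds body is simply the
# real focal length multiplied by the 2x crop factor.
CROP_FACTOR = 2.0

def equivalent_focal_length(focal_mm):
    return focal_mm * CROP_FACTOR

for lens_mm in (20, 50, 135, 200, 300):
    print("%dmm lens ~ %dmm equivalent" % (lens_mm, equivalent_focal_length(lens_mm)))
# The 300mm Canon works out to 600mm equivalent, which is why $27 buys so much reach.
```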
Do I need all these lenses? No, but it's a fun hobby. Now that I've started to get into the legacy prime lenses, it's not even very expensive. It's certainly become a great way to learn more about photography, and I'd highly recommend it to anyone who's got a photographic itch to scratch.
Friday, July 16, 2010
One of the things that's bugged me about Pear Note for a long time is that search took way too long. Search is one of those things that should feel instant while you're typing. The reason it was slow was that Pear Note queried Spotlight as you typed. Spotlight has gotten faster over the years, but it's still not in the league of speed I wanted. So, I started trying out ways to speed this up. It soon became apparent that the way to go was to keep my own database of Pear Note documents to search, rather than relying on Spotlight on the fly.
The obvious choice for a local database on the Mac would be Core Data. I'd never used it before, but it's supported by Apple and would likely suffice. However, a few months ago I attended NSConference in Atlanta and heard Aaron Hillegass talk about his new project, BNRPersistence. I'd previously heard Aaron talk about some of the issues with Core Data on a podcast, and at NSConference he unveiled this project to do something about them.
BNRPersistence is a set of local persistence classes built on top of Tokyo Cabinet and Tokyo Dystopia. Tokyo Cabinet is one of the NoSQL databases that are gaining popularity these days. Aaron demoed BNRPersistence at NSConference, and I took one thing away from his demo: it's crazy fast. Orders of magnitude faster than Core Data.
So, I set out to try both BNRPersistence and Core Data. That's when I discovered BNRPersistence's next major advantage: it's really simple. By the time I had started wrapping my head around managed object contexts and data models in Core Data, I had everything working in BNRPersistence. Objects being stored just need a pair of methods (very similar to complying with NSCoding), and almost the entire API for interacting with the object store is described in the short README. Once I had things working, I confirmed that BNRPersistence is indeed crazy fast. So, I stopped delving into Core Data and started integrating BNRPersistence into Pear Note.
It definitely does show its immaturity at times, so there are some drawbacks:
- It's not thread safe at all, so you have to make sure you lock appropriately or stick to a single thread for store access.
- There aren't many convenience methods, but that contributes to the simplicity of the API.
- Getting Tokyo Cabinet and Tokyo Dystopia installed and usable for multiple architectures was a bit of a pain, but I'll take a bit of administrative pain if it makes the code simpler and easier to write.
- Tokyo Cabinet stores have a very large disk footprint for what they store. For me, this wasn't a big deal as I'm only storing a search cache in it, but I'd have to do some testing before using it for a document format.
All that said, I think BNRPersistence is a great project and am happy I chose to use it. Once Pear Note 2 comes out in a couple weeks, you can all see it in action and judge for yourself.
Premium computers are not dying off
Saturday, May 1, 2010
This is a response to this post by Charlie Stross and the discussion it has prompted across the Internet. I don't entirely understand why people across the web are nodding their heads in agreement with it. I presume that most people are struggling to understand some of Apple's recent moves, and any explanation will do. That said, this line of thinking should be strangely familiar to all of us from a few years ago.
Charlie's basic points seem to be:
- The future of computing is Software as a Service (SaaS), content on the Internet, and data in the Cloud
- Because of this, desktop computers will become a commodity market, margins will disappear, and hardware will become much less profitable
- Desktop computer makers (e.g. HP, Dell, etc.) are doomed
- Apple is in even worse shape since they make premium computers and no one will want to pay for premium hardware to access the same Internet services
- Apple has decided the way forward is to stop focusing on hardware sales and instead create an ecosystem where they control access to content, since that's where the money is
Anyone else feel like it's 10 years ago? All the money is going to be in SaaS, and hardware will be a commodity market like electricity. Everyone will only care about getting the cheapest computer possible to access all the wonderful content and services. This shift keeps getting predicted, despite the fact that the real world seems to indicate the opposite shift.
You need look no further than Apple's latest quarterly results to see that Apple's focus is on making money on hardware. Mac and iPhone sales are way up, and that's been the story for some time. These premium computing devices that are theoretically dying off are becoming more and more popular.
You can see this around you as well. More and more people are paying attention to what devices they purchase and choosing to spend more money to get something they deem better. Sure, there will always be a market segment that just wants whatever is on sale at Best Buy, but the segment that is willing to pay for something nicer is the one that is growing. That's why Apple has been so successful over the past decade. More people are choosing to pay for a premium product.
Apple knows this, and that is why they are moving into more and more hardware segments (phones, tablets). They want to be the premium brand in all these segments that discerning customers choose. Content and software are mostly important because they help drive hardware sales (though they're happy to take money there as well).
Apple doesn't want Flash or other cross-platform tools used with the iPhone because their worst nightmare is a world where the user experience on an iPhone is no different from an Android phone. That would take away their ability to charge a premium for their hardware. It really is that simple. Imagining that this is part of a larger shift in Apple from making money selling their hardware to making money on content/software distribution is just an effort to rehash failed predictions from 10 years ago.