Are you like me? Do you find yourself checking your Facebook news feed regularly and with ever increasing frequency? When you see a good movie, or take a cool photo, or experience something unique, is your first thought “I need to write a status update about that”?
One of the reasons why Facebook is so popular is because it gives us a little dopamine hit every time we find something we like. It’s a bit like fishing — hours of idle time can be justified by those few exciting moments precipitated by a fish tugging on your line. There’s an even bigger hit waiting for active posters: for many people in the 21st Century, the Facebook “like” button has become a surrogate source of validation, commiseration, therapy and love.
Whenever you find yourself indulging in repetitive behavior there are two important questions you should ask:
Am I enjoying this behavior?
Is this behavior making me a better person?
As I think about my Facebook use, I realize that my enjoyment of the experience has declined over the years as it’s become increasingly automatic and addictive. There is, without question, some high-quality material on Facebook; however, the low signal-to-noise ratio means I need to spend a lot of time looking for those gems.
That brings me to my answer to the second question: if anything, Facebook has made me a less interesting person. Instead of reading books or blogs, taking an online course, or getting out in the real world and actually talking to other people, I’m wasting a non-trivial amount of my time sifting through the minutiae of everyday lives.
I’m not going to get into the privacy issues or the abysmal user experience or a bunch of other technical reasons why I dislike Facebook. Based solely on my answers to those two questions, I think it’s time for me to move on to pursuits that I enjoy more and that make me a better person. I plan to read more, write more on this blog (which I’ve neglected since starting a new job in 2011), and spend more time with my wife and daughter.
To my Facebook friends: au revoir, mes amis! You can continue to follow my random thoughts and ideas right here at http://marc1.org. And to all my friends, virtual and real: may you find what you’re looking for in 2013.
Imagine you’re about to open a candy store. You’ll want your shelves to be overflowing with enticing treats so you’ll need to invest in some inventory. But candy has a finite shelf life so if you end up stocking too much inventory, you can lose a lot of money on unsold candy. Any business that sells physical goods faces this dilemma: you want enough inventory to match your customers’ demand – any more and you pay the cost of excess inventory; any less and you lose sales opportunities.
A physical bookstore faces the same dilemma. Although they don’t spoil the way candy does, books also have a shelf life because authors, subjects and genres ebb and flow with the tides of fashion and culture. Now think about a digital bookstore. Because digital media is so easy to copy, the inventory cost to sell one copy of an ebook is essentially the same as the inventory cost to sell a billion copies. Any media that can be stored and copied digitally has a huge economic advantage over the corresponding analog incarnation.
That, in a nutshell, is why printed books and traditional bookstores are not long for this world. It’s already started happening. Earlier this year, Amazon.com announced that, for the first time ever, they had sold more electronic books than their paper counterparts. Borders, the venerable chain of book superstores, declared bankruptcy this year.
I will miss the look and the feel of paper books and the excitement of browsing shelves full of mystery and drama and surprise. But at the same time, I look forward to a world where books are easier to find and transport, cheaper to buy, more fun to read, and more environmentally responsible.
Tomorrow I’m going on a trip for work, where I’ll need access to numerous programming books and related references. Instead of carrying this on the plane:
I’ll be carrying this:
Doesn’t that make more sense?
Twenty-nine years ago, I started my first and, up till now, only job, at Bell Labs in Holmdel, NJ (the lobby of which is pictured above). Bell Labs was a magical place in those days, sort of like a cross between a corporate think tank and a Grateful Dead concert. There were really smart people everywhere, all sorts of clubs and activities and seminars and colloquia (I once got to see Steve Jobs, then CEO of NeXT, give a scintillating talk to a small audience, before he was bigger than God). The best part was that everyone dressed in jeans and t-shirts (and even shorts in the summer). That was a big deal for me because I’ve always hated the idea of having to wear a tie to work every day. Bell Labs was a place where no one cared how you looked or how you dressed – you were judged only by your ideas and your attitude.
For someone interested in computer science, this was a fascinating time and place. In the basement, behind heavily fortified walls, were four huge, multi-million dollar IBM mainframes, all of which were kept busy around the clock by computationally demanding scientists and engineers. One of my first assignments was to write system programs for those mainframes in something called Basic Assembly Language, a low-level programming language for IBM mainframes. Our developer tools were laughably primitive by today’s standards, but programming at that low level was a great learning experience (plus I managed to crash one of those expensive mainframes all by myself). The whole experience left me convinced there had to be a better way to develop software.
During this era, some Bell Labs researchers (principally Dennis Ritchie and Ken Thompson) invented something so innovative and so revolutionary that it forever changed how people used computers. Unix and C were an epiphany for me: this was how operating systems and programming languages were meant to be. Forty years after it was invented, C is one of the two most widely used programming languages and Unix continues to influence generations of operating systems. Before long I got a chance to develop software in C on Unix systems and there was no going back for me.
Several years later, I was working at an R&D office in Columbus, OH, co-located with a giant factory. I noticed that every day around 4pm, thousands of factory workers would line up like cattle by the exit gate, waiting for the clock to strike the top of the hour so they could punch out and leave work at the earliest possible moment. At the time, I was captivated by a software project. Though I was paid for a nominal forty-hour work week, at the end of the day I couldn’t tear myself away from the office, and I regularly worked nights and weekends, just because I wanted to. So when I saw all those workers who couldn’t wait to leave their job at the end of the day, I realized how lucky I was to have a job I loved so much that I didn’t want to go home.
Over the years, I’ve been fortunate to have worked with many smart and interesting and kind people and that experience has taught me a great lesson. If I were asked to give one piece of advice to a young person just starting out, it would be this: Always try to surround yourself with greatness, because great people will challenge you and inspire you to be like them.
Two days ago, the following window popped up on my laptop, reminding me it was my last working day at Bell Labs/Lucent/ALU:
At that moment, I was immersed in some code, still trying to write the best software I could, right down to my last hour (I actually worked overtime on my last official day :). Next month I start a new job with Google in Seattle. For me, it feels like coming home, returning to a place frequented by brilliant, unconventional and interesting people, a place where you can dress any way you like, and a place where people are judged not by how they look but by the quality of their ideas. I can’t wait. And, thankfully, I still won’t have to wear a tie.
Humans have two very strong impulses (among others): the need to be part of a group and the need to seek out new and interesting information. When we spend time with our families, join a club, or go to a party, we’re feeding that need to belong. When we read a book or browse the web, we’re feeding the need for new information.
Facebook facilitates sharing information within social groups. It’s a very powerful concept, one that appeals directly to the human need to socialize. In fact, I believe it strengthens group connections, which is why we see so many old friends and classmates reconnecting on Facebook.
What about the other need I mentioned – the need to find new information? That’s where Twitter comes in. Twitter is many things to many people but the primary value I see in Twitter is the ability to follow the thoughts and ideas of some of the world’s most interesting people. Whether you’re interested in news, sports, science, technology, or the latest comings and goings of Lady Gaga, Twitter has proven a remarkably timely and powerful source of information, usually beating the major news organizations to the punch.
Each service’s dominant usage model reinforces its unique value: Facebook users tend to focus on two-way “friendship” relationships, which facilitate group interactions, while Twitter users tend to accumulate one-way “follower” relationships, enabling them to monitor people they find interesting.
The newest arrival on the scene, Google+, implements a hybrid model: Google+ users can establish one-way follower relationships as well as bi-directional friend-like relationships. In this way, Google+ offers the best of both services. At the same time, its innovative design overcomes some critical shortcomings in both services (e.g., Five Things I Hate About Facebook).
So when my friends ask me what Google+ is all about, I like to say it’s basically everything you already like about Facebook and Twitter, plus better usability and a whole lot more cool stuff I haven’t even mentioned (like circles and group video conferencing). And for people like me, who’ve gotten used to reading and posting on two completely different services, all that goodness is now available in one place – that may be the biggest deal of all.
I’ve always appreciated an old bumper sticker, which was particularly popular and relevant during the Bush (Jr.) years: “If you’re not outraged, you’re not paying attention”. When I think about Facebook, a variation on that theme comes to mind: “If you like Facebook’s user interface, you’re not paying attention”. In this article, I’ll explore five important design flaws in Facebook’s user experience and comment briefly on how Google+ deals with each.
You’ve entered a status update and sometime later realize it contains an error. You now have two choices, neither of which is very satisfying: delete and re-create the update (in which case you lose any accumulated comments/likes/etc.) or comment on your own update with an awkward, after-the-fact correction. Why can’t I simply click an edit button and fix my update?
The only counter-argument I can imagine is that someone could change an update after others have commented on or liked the original version, which could be abused in various ways. But we already face that pitfall in many places. For example, I can post an article on this blog, gather comments on the original version and then deceptively update my post later. Thankfully, blog software designers understood that preventing a small minority of people from abusing a feature doesn’t justify denying valuable functionality to all users. [Google+ got this right - status updates (and comments!) can be edited any time, even after they've been posted.]
Facebook has its own independent messaging system, separate from any existing email system. This means that the email system I know and love, which has all kinds of great features (e.g. it does good things with conversation threads, it’s tightly integrated with my mobile device, etc.), along with years of my previous correspondence, is unusable within Facebook. This also means I end up with conversations being recorded in two different places. When I want to find an old message, I need to figure out whether it was part of a conversation that took place on gmail or Facebook. I may end up having to search both sites to find an item of interest.
In addition, Facebook’s messaging system lacks some basic functionality. The ability to forward a message was added only recently. Have you noticed that a message originally sent to a group can’t be replied to individually? Any replies go to all original recipients, whether you like it or not. And have you noticed there’s no notion of separate conversations or threads – every message is part of a never ending conversation with that particular recipient. [Email in Google+ appears to defer to your home email address/system. That's precisely how it should be - I already have an email service I'm happy with and I don't want my social networking services trying to duplicate or subsume that function.]
Pop quiz: go to facebook.com and see if you can figure out how to block a friend (not unfriend them, just hide their updates from your news feed). Let’s see, “Friends”, then “Manage Friend list”, then…nope. OK, how about “Profile”, then “Friends”, nope. Let’s try “Account”, then “Edit Friends”, no, that’s not it. After googling “hide a friend on facebook”, I found the answer. Facebook is full of such navigational challenges. [Based on my usage so far, I'd say that Google+ offers much more intuitive navigation and organization.]
Have you noticed how Facebook keeps introducing new features that affect how your news feed looks, or worse, how much of your personal information is shared with others? Most of us find out about these changes through a friend’s status update that usually goes something like this: “Hey everyone, I just found out Facebook changed our default setting for X to Y. Here’s how to undo that change.” Much has been written about Facebook’s track record in this area. This graphic makes the point more clearly than any words I can add here. [I don't have enough experience yet with Google+ to assess their treatment of privacy and opt-in vs. opt-out policies.]
Have you ever tried to find an old status update on Facebook? Here’s how it generally goes: click on username, scroll, scroll, scroll, click “older posts”, scroll, scroll, scroll, click “older posts” again, die of boredom. Is there any reason I can’t enter a search string to find my (or another user’s) old updates? [I don't see a way to search for old updates on Google+ either so this appears to be a shortcoming on both services. I'm interested to see which service adds this first. I'm starting my stopwatch now...Ready, set, go!]
Something amazing happened over the past twenty-five years: all media has gone digital. There are many implications of this revolution but one of the most important is an economic effect: the cost of copying digital media is, effectively, zero. In the analog era, when our music was stored as bumpy grooves on a vinyl disk, unless you owned a custom record press in your basement, copying a record was no mean feat. Same story with copying a book: turn page, reposition book, press copy button, repeat a few hundred times – no fun for anyone. Thus, in the analog age copyrights were self-enforcing.
But here’s an interesting question: why, in the digital age, do I continue paying for my music? In the past ten years I’ve purchased more MP3 music via the internet than analog music in any previous decade of my life. I could have borrowed those CDs from a friend or from the library and copied them for free. Why do I bother paying at all?
Though I like to think it might have something to do with ethics, there’s a better explanation: a transaction that used to involve getting in my car, driving to a record store, and physically handing someone my hard-earned cash has turned into an impulse purchase. I hear something that interests me, I click on a button and, Presto!, I own it. The very same technology that makes it easy for me to steal music also makes it incredibly easy for me to buy music.
Tim O’Reilly is a media entrepreneur who understands the digital world about as well as anyone. In this excellent short interview with Forbes, he shares his insights on digital rights management. This excerpt is particularly noteworthy:
Jon Bruner: On all your titles you’ve dropped digital-rights management (DRM), which limits file sharing and copying. Aren’t you worried about piracy?
Tim O’Reilly: No. And so what? Let’s say my goal is to sell 10,000 copies of something. And let’s say that if by putting DRM in it I sell 10,000 copies and I make my money, and if by having no DRM 100,000 copies go into circulation and I still sell 10,000 copies. Which of those is the better outcome? I think having 100,000 in circulation and selling 10,000 is way better than having just the 10,000 that are paid for and nobody else benefits.
People who don’t pay you generally wouldn’t have paid you anyway. We’re delighted when people who can’t afford our books don’t pay us for them, if they go out and do something useful with that information.
I think having faith in that basic logic of the market is important. Besides, DRM interferes with the user experience. It makes it much harder to have people adopt your product.
Times have changed. The companies that succeed will be those, like O’Reilly’s, that adapt to the digital world and figure out how to get great products into peoples’ hands quickly, conveniently and at a competitive price. Right now the record companies seem to be spending a lot of their time and energy trying to figure out how to put the genie back into the bottle. Good luck with that.
This summer I am again teaching my Introduction to Programming and Application Development course at the University of Washington PCE (Professional and Continuing Education). I’ve created a promotional presentation about this course using a neat tool called Prezi (this is the tool used to create those awesome TED talks with the flying slides). Click through the slides below to find out more about this course.
June 21 – August 23, 2011, Tuesday nights, 6-9pm
Downtown, 1325 4th Ave (4th & Union)
Python Programming for the Absolute Beginner, 3rd Edition, by Michael Dawson (ISBN 1-4354-5500-2)
I like your cloud music concept so much that I wrote a veritable love letter about it two nights ago. Since then I’ve spent a little quality time with your new service (I uploaded all of my 20GB digital music collection) and I’m still infatuated, but I now have some specific feedback for you.
Bulk upload is easy to use, easy to customize and works like a charm. Kudos on that. It’s also very slow (it took nearly a full day to upload my 20GB of music) but I’m guessing that’s by design – by not hogging my processor or network adapter, it quietly chugs along off to the side doing its thing, while I get real work done. That’s fine, I only have to bulk upload once so I can live with it being slow.
The web player is clean, intuitive and surprisingly responsive for a web app – it seems to be in that category of well designed Ajax apps (like gmail) that feels almost like a desktop app.
Device support – not so good. You have to figure out a way to support iThings – this can’t be about Android vs. Apple. Trust me, I’m on your side, I want this service to succeed but I’m not dropping my iPhone or iPad just to access your cloud player. I understand that out of the box you don’t yet support iOS but you need to at least make some sort of statement so we know where you stand.
Love that you give a free 5GB for the casual user and, on album purchase, a free upgrade to 20GB for one year for the more serious collector. Even after the promo, a dollar a GB for backed up storage with ubiquitous access seems like a pretty reasonable price to me.
Not that you need my help on the marketing end but here’s a suggestion: in addition to the free 5GB, offer three free MP3 albums. Free cloud storage space is sexy only to nerds like me. Free music has much broader mass appeal.
When I buy music from your MP3 store, you force me to store it in one of two places: directly on my cloud drive or downloaded to my computer. Guess what? I want the “both” option. I’d like it dropped immediately into my cloud drive and I want a copy for safekeeping on my computer. Why? Because I don’t want my music locked into your service. If someone offers a better service, I’d like to be able to move my music to another cloud (it’s my music, after all). With the current setup, I need to download purchases and then upload them to my cloud drive, which is a hassle.
UPDATE: The cloud player supports automatic download (details here, see the “Setting Cloud Player to auto-download new purchases” section). Here’s the setting you want to look for: It would be nice if that choice could be made more explicit at MP3 purchase time.
In summary, nice work on a truly groundbreaking service. I’m a fan, but to keep me you’ve got to finish the job. Make it easy for me to play my music on my iPhone and iPad.
p.s. As far as I’m concerned, you needn’t worry about iPods. I’m betting some enterprising Android developer is already working on a wireless portable cloud player. I’m much more willing to ditch my iPod than my iPhone or iPad. BTW, a cloud player based on Whispernet (ubiquitous connectivity with no service contract) would be awesome!
Today Amazon.com announced something that is, to borrow a famous Steve Jobs-ism, insanely great. For a while now, Amazon has been the industry leader in so-called cloud computing (providing storage and computing resources via the web) with their Amazon Web Services (AWS). I could go on about how innovative and powerful AWS is but that’s a topic for another day. Today I want to talk about the new announcement: Amazon has come up with a creative way to merge their music download store with their cloud computing services.
Why is that a big deal? Think about how you normally work with music. You’ve probably downloaded a bunch of songs from iTunes. First of all, you’d better make sure you back up those songs because you’re only one disk crash away from losing your entire digital music collection. Secondly, you need to worry about moving copies of those files around. Want to play a song on your iPhone? You need to synch it. Want to share it with your kid’s MP3 player? You need to do another synch. The point is that you didn’t just download a song, you took custody of a bunch of bits which you are now responsible for managing. I don’t know about you, but I’ve got enough things to manage in my life.
So how does this new service help? When you buy music from Amazon, instead of downloading it to your computer, it gets stored in what is essentially your personal music storage locker in Amazon’s cloud. Guess what? No more worrying about backups – it’s taken care of for you. Want to play that song on a smart phone or tablet? It’s immediately available to be played remotely from any device that supports Amazon’s music player – no more need to synch anything. You buy it – you play it, anywhere, from any device.
It’s important to note that in this initial release, the Amazon Cloud Player is limited to access via the web and a native Android app. Amazon doesn’t yet have a native app for iOS (i.e. a version for Apple products), but this is the very first version – Apple support is bound to be high on the list of early features.
In the past, I’ve written about why I never buy music from iTunes. My normal mode of operation for the past few years has been to buy all my music from Amazon in pure MP3 format but then store the tracks in my iTunes library so that I can synch it with my family’s various iDevices. As soon as Amazon comes out with iPad/iPhone/iPod support (or, in the case of iPod, a suitable replacement), then I’ll just store all my music in the cloud. Hopefully there’ll be a way to upload my existing songs en masse. Then my entire digital music collection will be backed up for me, automatically, and I’ll be able to access it anywhere I like, from any device I like. In the words of Jack Johnson, this is how it’s supposed to be.
Do you know what CAPTCHAs are? They’re those ubiquitous word recognition challenges that web services use to make sure you’re a human being. Invented by researchers at Carnegie Mellon University in 2000, CAPTCHA stands for “Completely Automated Public Turing test to tell Computers and Humans Apart”. Ticketmaster, for example, uses CAPTCHAs to prevent automated scalper-bots from buying up all the tickets to popular concerts.
It’s a pretty cool idea – it exploits the fact that computer software, advanced though it may be, has a difficult time doing something we humans take for granted: recognizing messy, ambiguously constructed text. The downside, of course, is that in order to stay a step ahead of the bad guys, over time CAPTCHAs have gotten harder to recognize by humans. Take this one, for example, with which I was just presented when signing up for a new service:
I can tell the second word is “urnice” but am I supposed to recognize that first word? As far as I can tell, it’s an ink stain.
Perhaps now might be a good time to admit that I impose CAPTCHAs on people leaving comments on this blog. I do so reluctantly, to thwart a high volume of spam comments; however, it annoys me to no end that in order to deter spammers, I have to make life more difficult for legitimate visitors. Recently I added support for integrated Facebook comments, which seems like a nice way to solve both problems (it’s convenient and largely spam-proof), the only downside being that I’m aiding and abetting Facebook’s inexorable march toward world domination.
In recognition of Twitter’s fifth anniversary, Robert Scoble published an historic circa-2006 video interview with the three founders of Twitter (see below). This video reveals the founders’ views, while the company was still in its infancy, on what Twitter was intended to be and how it was expected to be used. What I find most interesting about it is that, with hindsight from 2011, it seems apparent that Twitter’s creators really didn’t fully understand what they had created and how it would ultimately be used.
There’s a popular myth that inventors possess a laser-like vision of how their product or service will be used but, especially with disruptive technologies like social networking, creators usually have just a faint glimmer of an invention’s full potential (as is now widely known, Facebook started out as an online directory for college students). Ultimately, the party that decides how a unique technology will be used is the end user, who often creates new and interesting modalities that were never anticipated by inventors.
This is, at least partly, what makes modern technologies so exciting: there’s a grass-roots, crowd sourced, participatory aspect, which gives all of us a stake in the innovation process. Essentially, we are all inventors because we all help extend and enhance the ideas of the creators. I believe the most successful companies will be those who recognize and embrace that reality – companies that involve, engage and, to use Guy Kawasaki’s term, enchant their users by treating them a little less like customers and a little more like collaborators.
As my readership has grown over time, I’ve seen a corresponding increase in the number of “spam comments”. Spammers (most likely, automated “spam-bots”) routinely plant comments in any blog they can find with decent traffic. They include a link to their site in the comment (or associated data) because the number of links to a particular site is a critical factor in most search engine algorithms. This is an attempt to “game the system” and elevate their own site’s position in search results.
Here’s an example of a typical spam comment I received today:
Thanks a lot for giving everyone an extraordinarily memorable chance to read in detail from this site. It is often so brilliant and packed with a lot of fun for me and my office acquaintances to search your site nearly thrice in a week to read through the latest items you will have. And definitely, I’m certainly astounded considering the exceptional ideas you serve. Certain two points in this post are really the most beneficial we’ve ever had.
And here’s my response:
Thanks for your extraordinarily memorable comment! I’m glad to hear I’ve been able to provide fun for you AND your office acquaintances! I’m also flattered that you visit my site thrice weekly. Someday, I hope to inspire you to visit quadrice weekly. I removed your link because I wasn’t sure if it was appropriate for my readers – I hope that doesn’t make my exceptional ideas any less astounding!
I’m under no illusion that the source of this spam will ever see my response but it was fun to write nonetheless. :)
Like an infomercial claim (“It slices! It dices!”), this article’s title sounds too good to be true, but it is true – in one article, I’m going to explain how the web works and you will walk away a better informed human being. All you have to do is give me a few minutes of your time. Sound like a deal?
As the video above illustrates, lots of people don’t quite understand some basic things about the web, like what a browser is. A web browser is a program running on your computer (or smart phone, or iPad or…) through which you access the World Wide Web. The browser’s job is to make it possible for you to visit pages on the web. But what’s really happening when you use your browser to access the web?
Let’s start at the beginning…imagine you’re sitting in front of your web browser and you enter a URL (which is a fancy term for the address of a resource on the web). Let’s step through what actually happens when you press the enter key. The first step is that your URL gets parsed by the browser. Parse is a fancy term for dividing something into pieces. If you’re ever at a cocktail party with computer scientists, try to work the word ‘parse’ into the conversation and everyone will be very impressed with you.
URLs are formatted like this: “<protocol>://<server>/<path>”. Let’s take a look at a real URL and see how it gets divided into pieces:
the protocol: This is the “how” – it tells your computer which conventions to use when talking to the computer serving the requested page. In this example, the desired protocol is “http”, which is a special set of rules for requesting and receiving web content.
the server: This is the “where” – it tells your computer the name of the computer serving the requested page. In this example, the server is “www.npr.org”, which is the name for one or more computers operated by NPR.
the path: This is the “what” – it indicates which page you’re interested in accessing on the requested website. In this example, the path is “series/tiny-desk-concerts/“, which is the name associated with a particular page among many available at the NPR website.
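If you’re curious what this parsing looks like in practice, here’s a quick sketch using Python’s standard urllib.parse module, with the same NPR URL as the example (note that the library reports the path with its leading slash):

```python
# Parse a URL into its pieces, the same way a browser does.
from urllib.parse import urlparse

url = "http://www.npr.org/series/tiny-desk-concerts/"
parts = urlparse(url)

print(parts.scheme)  # the protocol: "http"
print(parts.netloc)  # the server:   "www.npr.org"
print(parts.path)    # the path:     "/series/tiny-desk-concerts/"
```

Try it with any URL from your browser’s address bar; the three pieces fall out the same way every time.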
For every URL you type, there is a computer just like your computer (although probably a lot more powerful), just waiting to respond to your requests. Actually, most popular web sites are served by large banks of computers called clusters, but to the outside world, such clusters operate like a single, very powerful computer, so it’s fine if you want to think about all the pages at npr.org as coming from one giant computer. These computers are called “web servers”, because they respond to (i.e. serve) requests from “clients”, like your browser.
Now that the browser has chopped up your URL into pieces, it can get down to work. The first thing it needs to do is establish a communication session with the requested server. But first it needs to figure out how to reach that server on the internet. I’m going to let you in on a little secret: inside the internet, when computers talk to each other, they don’t use the nice, human-friendly names we’re used to, like ‘espn.com’. They use boring-looking sequences of numbers, like 192.168.144.227. These numbers are called IP (Internet Protocol) addresses. Every computer on the internet, including the computer you’re using right now to view this site, is assigned a unique IP address. Would you like to know what your IP address is? Click on this link: http://whatismyipaddress.com/ and you can see your very own personal IP address, as well as some other information about your computer. You’ve been using your computer for how long? And you’re only now learning its real name!
Let’s say you want to call your mother. What do you do if you don’t know her phone number? (and, by the way, shame on you for that!) You look it up in the phone book. Or at least that’s what we did in the stone age when we had phone books – now you might look it up online. The phone book is a great analogy for what goes on when your browser wants to connect to a server it knows only by name – it needs to find the IP address associated with that name. The way it does that is by consulting a special resource called the DNS (Domain Name System). DNS is the internet’s “phone book”, so to speak. It’s how clients, like your web browser, convert a server name into its corresponding IP address.
Want to look up a name in DNS yourself? Visit http://www.webmaster-toolkit.com/dns-query.shtml and enter any server name you like. I just entered “www.npr.org” and found out NPR’s IP address is 184.108.40.206. Here’s another cool thing…your browser can use addresses just as well as names. Open a new browser window and enter npr.org’s address (or just click on this link: http://220.127.116.11). Your browser sees that and says “Wow, this user gave me an IP address so I can skip the hassle of looking up a name in DNS and just connect directly to the address provided”.
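You can do the same lookup programmatically. Here’s a minimal Python sketch using the standard library; try any server name you like (“localhost” is used here because it resolves even without an internet connection, always mapping to your own machine):

```python
import socket

# Ask DNS for the IP address associated with a server name, just as
# your browser does before it can connect. Substitute any name you
# like, e.g. "www.npr.org".
hostname = "localhost"
address = socket.gethostbyname(hostname)
print(f"{hostname} resolves to {address}")
```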
What happens next? Your computer makes a connection to the server’s IP address and the server accepts the connection, sort of like the way you call your Mom and she answers the phone. After the connection is established, your computer sends something called an HTTP request (more on this later) and the server does one of two things: if it can find the page you requested, it returns it in an HTTP response. If the server can’t find the page you requested, it returns a special “404 page not found” response, which we all see from time to time when we mistype a URL.
On the world wide web, computers don’t communicate with words, they use protocols like HTTP (HyperText Transfer Protocol). HTTP is a way to structure requests for web resources (and the corresponding responses) so that they can be understood clearly and unambiguously by a computer. The request/response between your browser and the server is similar to this scenario: after your Mom answers the phone, you say “hey, Mom, can you give me your recipe for that delicious Fritos casserole?”. That’s very similar to an HTTP request for a particular object (the casserole recipe). In response, your Mom does one of two things. She might say “oh, sure, I have it right here – first you preheat the oven to 425 degrees…”, which is like the HTTP response above. Or, she might say, “I’m sorry, but I can’t find that recipe, I must have misplaced it”, which is the human equivalent of a “404 page not found” HTTP response.
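Under the hood, an HTTP request and response are just structured text. Here’s a Python sketch that builds the kind of request a browser would send and parses the status line of a response (the response text here is canned for illustration, not fetched from a real server):

```python
# The request a browser sends for a page is plain, structured text.
def build_request(server, path):
    return (f"GET {path} HTTP/1.1\r\n"
            f"Host: {server}\r\n"
            "\r\n")

# The server's answer begins with a status line; 200 means "here's
# your page" and 404 means "page not found".
def parse_status(response_text):
    status_line = response_text.split("\r\n")[0]  # e.g. "HTTP/1.1 200 OK"
    return int(status_line.split()[1])

request = build_request("www.npr.org", "/series/tiny-desk-concerts/")
print(request)

ok_response = "HTTP/1.1 200 OK\r\nContent-Type: text/html\r\n\r\n<h1>Hello</h1>"
missing_response = "HTTP/1.1 404 Not Found\r\n\r\n"
print(parse_status(ok_response))       # 200
print(parse_status(missing_response))  # 404
```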
In addition to using a protocol to manage the transfer of information, the actual content that gets transferred and presented by your browser also follows a very precise format called HTML (HyperText Markup Language). Here’s an example of a very simple HTML document:
<h1>Here’s a picture of my dog:</h1>
<img src="wp-content/uploads/2011/02/meiko.jpg" />
<p>His name is Meiko. As you can see, he is quite awesome.</p>
Just to give a taste of what you can do with HTML, the <h1> and </h1> “tags”, as they are called, bracket some text to be printed as a heading, and the <img> tag identifies an image or a picture to be displayed. The <p> tag starts a new paragraph.
I’ve created a file containing the HTML document above at the path “/dog.html” on my server (mkcohen.com). I’ve set it up to use HTTP as the transport protocol, so putting those three pieces together, the entire URL for accessing my document above would be: http://mkcohen.com/dog.html. Go ahead, click on that link and see what happens (I’ve programmed it to open a new browser window so you won’t lose your place in this article). Here’s a review of what just happened:
You told your browser you wanted to visit a particular URL (http://mkcohen.com/dog.html).
Your browser parsed the URL into three pieces: the protocol (HTTP), the server (mkcohen.com) and the path (dog.html).
Your browser used the DNS system to convert the server’s user-friendly name (mkcohen.com) into my server’s internet protocol address (18.104.22.168).
Your browser made a connection to my server’s IP address.
Your browser sent my server an HTTP request asking for a copy of the HTML document stored at dog.html.
My server found the requested HTML document and returned it to your browser via an HTTP response.
Your browser received the response.
Your browser interpreted and displayed the HTML document contained in the response. At this point, you struggled to contain your joy as the magnificently handsome Meiko appeared on your screen.
Your browser dropped the connection to my server, terminating the session.
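The whole sequence above can be reproduced in miniature with a few lines of Python. The sketch below runs a tiny web server locally and fetches an HTML document from it, so the connection, request, response and HTML steps all happen on your own machine (no DNS lookup is needed because we connect to the address 127.0.0.1 directly):

```python
import threading
from http.server import HTTPServer, BaseHTTPRequestHandler
from urllib.request import urlopen

HTML = "<h1>Here's a picture of my dog:</h1>"

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The server finds the requested document and returns it
        # in an HTTP response.
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(HTML.encode())

    def log_message(self, *args):
        pass  # keep the demo quiet

# Start a tiny web server on a spare local port.
server = HTTPServer(("127.0.0.1", 0), Handler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# The client connects, sends an HTTP request and reads the response.
with urlopen(f"http://127.0.0.1:{port}/dog.html") as response:
    document = response.read().decode()

print(document)  # the HTML the "browser" would now display
server.shutdown()
```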
Of course, there’s a lot more to this story but these are the basic, fundamental things that happen every time you click on a link or visit a web page. You may not be ready to build your own browser but I hope you now have a better understanding of how the web works. Leave me a note below if you have any questions or comments.
P.S. No dogs were harmed in the making of this article.
Twitter’s 140 character limit raises an interesting question: how many tweets are possible before nothing new can be said?
Most of the information in tweets is conveyed via alphabetic characters and the digits 0-9. We should also include the ubiquitous hash mark and “at sign” (twitter tags and references, respectively) and several other punctuation marks, including the all-important space character. Twenty-six letters (we’ll ignore upper and lower case), 10 digits and roughly a dozen special symbols get us to about 50 unique characters. The total number of possible tweets can thus be calculated by raising 50 to the 140th power. What’s that look like? Here’s a cute google trick: enter any mathematical expression in the search bar and you get the calculated answer, as illustrated below.
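If you’d rather not take Google’s calculator at its word, Python handles arbitrarily large integers natively, so you can check the figure directly:

```python
# Number of possible tweets: 50 choices for each of 140 characters.
possible_tweets = 50 ** 140

print(len(str(possible_tweets)))  # 238 digits long
print(str(possible_tweets)[:5])   # the leading digits
```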
This is a *really* big number. How big? For the non-exponentially inclined, that’s a 238-digit number, roughly a 7 followed by 237 zeros. So that “twitter is running out of tweets” rumor (not a real rumor, I just made that up) is hereby debunked. To get a sense of the size of that number, let’s compare it to some other large quantities:
According to this wikipedia article, the number of stars in the observable universe is a mere 1 followed by 22 zeroes, which is not even a close match for the observable universe of tweets.
Per Wolfram Mathworld (by the way, is there a nerdier site on the planet?), the number of possible chess positions is about 1 followed by 40 zeroes. Compared to our twitter limit, that’s a minor league number.
OK, you get the point. But actually this drastically overstates the number of tweets. Why? Because any random combination of letters, while perhaps legal, would not be considered meaningful tweeting. Scintillating tweets like this one: “ertyus hbd fnio dfghjk bnm” would never be written by a human being, unless we’re talking about Sarah Palin. So how do we count only “meaningful” tweets? Let’s start with the assumption that meaningful tweets are composed of a number of english words and proper nouns (sorry rest of the world, this is the point in the article where I go all ugly American on you).
According to this article, there are roughly 170,000 commonly used words in the English language. Let’s exclude the words that only Ken Jennings knows and reduce that to 10,000, which, per this source, is thought to be the size of the average person’s vocabulary. Per this source, the average length of an English word is 5.1 letters. Let’s lower that average to 4 letters to account for popular abbreviations (“u r my BFF, LOL”). Next, we divide the available 140 characters by 5 (four letters plus one space for each word) and we get 28. Essentially, tweets are constructed by making up to 28 choices from a pool of roughly 10,000 words. This gives us an estimate of the number of meaningful tweets:
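The back-of-the-envelope arithmetic above is easy to check (the figures are the rough assumptions from the estimate, not precise data):

```python
# Assumptions from the estimate above (rough figures).
vocabulary_size = 10_000   # words the average person knows
avg_word_length = 4        # letters, after allowing for abbreviations
tweet_limit = 140          # characters per tweet

words_per_tweet = tweet_limit // (avg_word_length + 1)  # +1 for the space
meaningful_tweets = vocabulary_size ** words_per_tweet

print(words_per_tweet)                  # 28 word slots per tweet
print(len(str(meaningful_tweets)) - 1)  # 112: a 1 followed by 112 zeroes
```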
That’s a 1 followed by 112 zeroes. But we still have a problem – the vast majority of those would be entirely meaningless tweets, with no semblance of the rules of English grammar. In other words, they would look like a typical teenager’s text messages.
Even if we remove 99.99% of those tweets, the result is still a whopping number: 1 followed by 108 zeroes, which happens to be close to a very famous number. A googol, the name of which inspired the differently spelled and far more profitable search engine, is defined as a 1 followed by 100 zeroes. Finally, we have our answer: The number of meaningful tweets, give or take a few trillion, is given by…a googol.
Of course, it really doesn’t matter how many tweets are possible. The whole point of this article was to show that, in the age of the internet, almost any quantitative question can be answered with a combination of googling, calculating and some good old fashioned thinking. I like pondering crazy questions like this one because I think it’s fun and a good way to build analytical skills. Now, go figure out something really important, like how many licks it takes to get to the center of a tootsie pop!
I’ve noticed a rash of a certain kind of virus on Facebook recently. Here’s an example of what it looks like (poster name and avatar obscured to protect the innocent):
The point of this trick is to get you to click on the link. Hence, these updates usually have an attention grabbing headline with a compelling picture. What happens if you click through? It replicates itself, i.e. it automatically creates a new status update sharing the same link with all of your friends, without your consent. A pernicious variation on this theme disables the comment capability so no one can warn upstream users.
How can you avoid this kind of nastiness? Here’s something that may help. If you hover your mouse over a shared link, nearly all browsers will reveal the URL (geeky term for a web site’s destination) associated with the link, usually along the bottom of the browser window. For example, here’s a status update with a link to a New York Times article:
My mouse is hovering over this update and, as you can see, at the bottom of the window my browser is showing me the URL attached to this link. The URL starts with “http://www.nytimes.com/…” so I can be reasonably sure it’s legitimate.
As a general rule, the URL should be recognizable and should match the content. If the update purports to be a video, the URL should indicate youtube, vimeo or some other known video streaming service. A link to a Facebook photo should start with “http://www.facebook.com”. You get the idea.
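If you want to see how a link’s true destination can be checked programmatically, here’s a small Python sketch. The list of trusted sites is purely illustrative (a real safety check would need far more than a hard-coded list, and passing this test doesn’t guarantee a link is safe):

```python
from urllib.parse import urlparse

# A few sites we recognize (an illustrative list, not a real safety database).
TRUSTED_HOSTS = {"www.nytimes.com", "www.youtube.com",
                 "www.facebook.com", "vimeo.com"}

def looks_legitimate(url):
    # Pull out the server name the link actually points at, the same
    # information your browser shows when you hover over a link.
    host = urlparse(url).hostname
    return host in TRUSTED_HOSTS

print(looks_legitimate("http://www.nytimes.com/2011/some-article"))  # True
print(looks_legitimate("http://free-ipad-winner.example/claim"))     # False
```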
Of course, this is not fool-proof, by any means, as people often post legitimate links to obscure sites you’ve never heard of. In that case, a dash of common sense helps. Is your 80 year old mother sending you an amazing story about Justin Bieber? Is someone you rarely communicate with suddenly offering you a free iPad? Don’t click on anything that doesn’t pass the smell test.
What should you do if you inadvertently fall for one of these tricks? At the very least, you should delete any status updates shared on your behalf by simply clicking the ‘x’ in the upper right corner of the update. You might also message the person who posted the update that tricked you, letting them know what happened, so they can remove their copy (they were tricked too and often don’t realize it). A good rule of thumb is, before you exit any facebook session, check your profile page and make sure there are no status updates that you don’t personally remember adding. If you see something fishy, delete it.
Another thing to be wary of – sometimes you’ll click on a legitimate link which takes you to a facebook page requesting access to your profile. Popular players in this genre are the “photo of the day” app and the “find out who’s been accessing your profile” app. The problem is there’s no way to know whether you can trust the app with your private data. Any time I click on something that asks for access to my data, no matter how intrigued I am by the potential service, I say thanks but no thanks.
Facebook is really cool for sharing info, pictures, etc. with friends but it’s also a virus writer’s dream because it’s very hard to tell when it’s ok to follow a link. The best advice is TBC – Think Before Clicking. Just like in the real world, if you have a bad feeling about something, skip it.
People sometimes ask where I find the music featured in my Song of the Day series. Back in the days when we listened to music by dragging a tiny stylus across a rotating piece of plastic, there were only two reliable ways to find new music: 1) listen to the radio and 2) check out your friends’ record collections. Nowadays, thanks to the internet, finding new music couldn’t be easier. Here are my ten favorite web resources for discovering new and exciting music:
Youtube – Youtube isn’t just a great site for videos – it’s the best jukebox ever invented. Search on any song title or band name and you’ll probably find more videos than you have time to watch. You have to do some digging to find the real gems, but if you look beyond the slick, record company produced videos, the pirated uploads set to boring still images and the shaky cell phone videos, you’ll find some truly amazing live music performances.
Twitter – Twitter is a great source of tips on new music. The trick is to find and follow people whose taste interests you the most. Follow your favorite artists and see who they tweet about. I’ve received some great music tips recently by following @fleetfoxes (Robin Pecknold), @Gibbstart (Ben Gibbard) and @Slowcoustic (Canadian acoustic music blogger).
Radio Paradise – This is a radio station streaming live on the internet. What sets it apart from thousands of other indie internet radio stations is the eclectic and diverse playlist, reflecting the impeccable taste of founders Bill & Rebecca Goldsmith. I never fail to hear something new and interesting whenever I listen to Radio Paradise.
Daytrotter – This site invites touring musicians to a barn in western Illinois for an impromptu live recording session. The results are posted in free, downloadable MP3 form and videos are often available as well.
Hearya – Another great site for live recordings of indie bands, based in the Chicago area. Like Daytrotter, Hearya posts original, freely downloadable recordings, often with video material as well. In addition to the live recordings, Hearya is an excellent indie music blog.
NPR Tiny Desk Concerts – This is an excellent source of great live music featuring video performances behind a desk somewhere inside the NPR Music offices. Despite the cramped venue, something about this show brings out the best in visiting artists. The production values on these videos are consistently excellent.
Live From Daryl’s House – The concept is simple – every month a new artist comes to visit Daryl Hall (of Hall & Oates fame) at his home in upstate New York, whereupon a mix of songs from the artist’s and Daryl’s catalog are performed live. There are at least three good reasons you should check out this site: 1) it’s interesting to watch the creative process when diverse artists collaborate, 2) the range of guest artist/collaborators is rich and eclectic, and 3) Daryl Hall and his band have an uncanny knack for making everyone sound better.
KEXP – From their amazing playlist, to their fantastic live in-studio performances, to their huge archive of live performances (all downloadable from kexp.org, videos available on the kexp youtube channel), Seattle’s KEXP is simply the best indie radio station in America.
“Gonzo” Live Music sites – There are a number of sites featuring low budget impromptu live videos of lesser known artists. These sites are decidedly indie and very spontaneous in spirit. My favorites in this category are: Shoot The Player, Take Away Shows, Bandstand Busking, and Black Cab Sessions. The latter features artists performing live while jammed into the (moving) back seat of one of London’s famed black taxicabs.
Local Sites – A great way to keep abreast of local artists, live shows, record store appearances, etc. is to find the best local music blogs in your neck of the woods. My favorite sites for the local music scene in Seattle are: Three Imaginary Girls (who have the best Seattle music events calendar I’ve found anywhere on the web), Sound on the Sound and 103.7 The Mountain.
Once again, a computer beat us at one of our own games but a part of this story deserves more attention. At the conclusion of the last great man vs. machine contest, Garry Kasparov stormed off the stage after losing a six game match to IBM’s lethal chess computer, Deep Blue. Kasparov behaved like a third grader who’d just been knocked out of an elementary school chess tournament. To be fair, up to that point Kasparov had not had much experience with losing. Below is a short excerpt of the post-match press conference, in which Garry painfully tries to convince the audience he kinda’, sorta’, didn’t really lose. It’s not pretty. [Old nerds like me will excitedly note the presence of Unix co-inventor and chess software guru Ken Thompson sitting with the Deep Blue team.]
This week we saw another epic human/computer challenge. Again silicon triumphed over carbon but this time around the human reaction was quite different. Here’s how the greatest Jeopardy player in history reacted to his loss:
“I had a great time and I would do it again in a heartbeat,” said Mr. Jennings. “It’s not about the results; this is about being part of the future.”
In defeat, Ken Jennings was gracious and humble and he taught us all, especially our kids, a valuable lesson in sportsmanship. As I watched this fascinating contest, I was very proud of Ken’s performance, especially after the match was over.
I can’t say you didn’t warn me. As noted in an earlier blog post, I knew my days were numbered. But it was kind of sad to see my happy little search engine suddenly start spewing completely random results. In fact, here’s what it sounded like if you used AmaZoom the day after you dropped support for review data.
Anyway, it’s your ball – you can take it home whenever you like. But we were kind of in the middle of a game here. I’m sure you had strategic reasons for doing this but it sure makes your API useless for sites like mine, which try to provide a unique product search experience based on review data. I’ve put the below banner on my site to alert wayward visitors that, sadly, AmaZoom is no longer open for business.
Fortunately, my kid’s college fund does not depend on AmaZoom (or any of my other hare-brained online initiatives).
There’s a fascinating new search tool recently made available by Google. It gives you the ability to freely search 5.2 million digitized books (over 500 billion words!) published between 1500 and 2008. Why should you care? It gives us unique insights into what people have been thinking and writing about over the past 500 years. Results are provided graphically so that you can get a glimpse of how a topic has trended over time. You can try it for yourself here.
Since 2004, Google has offered an interesting tool called Google Trends, which tracks the popularity of web search terms. That tool provides a view into the collective zeitgeist of millions of Internet users but the data is limited to just a few years. The new tool provides a much more comprehensive view of word usage going all the way back to 1500. To illustrate, here are a few, admittedly facetious, searches:
Which is most popular, sex, drugs or rock and roll? Apparently, sex wins by a long shot. Although, it appears to be on the decline lately. Go figure.
Beatles vs. Stones. Despite the Rolling Stones’ longevity (over 40 years together vs. the Beatles’ measly 10), the Fab Four have always been the hotter topic among writers.
Beatles vs. Jesus – remember John Lennon’s infamous quote about the Beatles being bigger than Jesus? Apparently, not so much…
And finally, the Civil War…
I expected to see “Civil War” begin a long spike in the 1860s, which it does, but this graph reveals there was another civil war that was widely written about in the mid-1600s. In 1642 the English Civil War took place (also called the English Revolution). Until generating this graph, I was not familiar with the “other civil war” but apparently it was a pretty popular subject back in the day.