Category Archives: Twitter

[repost ]Engineer-to-Engineer Talk: How and Why Twitter Uses Scala

original:http://blog.redfin.com/devblog/2010/05/how_and_why_twitter_uses_scala.html

To kick off our San Francisco series of engineer-to-engineer lectures on new technologies and interesting problems in consumer software, we invited in the Great Alex Payne to talk about how Twitter uses Scala, a programming language that combines traits of object-oriented languages and functional languages with an eye toward supporting concurrency better in large-scale software.

Alex started at Twitter in 2007, working remotely in Washington DC, when there were “only one and a half engineers.” Now, Twitter has 170 engineers. “It has been an interesting process,” Alex said. Right after his talk, Alex packed up his cats and headed for Portland, where he’ll still work for Twitter, but ensconced in a smaller, more closely-knit community. Here are his thoughts on Scala (Alex talks fast, and doesn’t waste many words, so my hands were in a rictus of agony from trying to type what he said):

Best, Glenn at Redfin

I started working on the programming interface when we were at this very early stage. Now, it handles a couple billion operations every day. It is being baked into more and more of the Web.

I’ve spent the past year working on Twitter’s infrastructure. For that, we use a weird language called Scala. I worked on a book for O’Reilly about Scala that you could sit down with over a three-day weekend to get up to speed on the language.

Why Use Scala?
Why use Scala when you have Ruby and Ruby on Rails? Well, we still use Rails. It works great for front-end stuff. The productivity is worth the tradeoff for working in a slower-performing dynamic language. When you think about what a web framework is doing under the hood, it’s tons and tons of string concatenation. Ruby on Rails can handle that.

What we needed as Twitter grew were long-running heavy processes: message queuing, caching layers doing 20,000 operations a second. Ruby garbage collection is tough, and Ruby doesn’t do really well with long-running processes.

Languages Twitter Considered
We knew we needed another language. How did we pick a language that was really fun for us? We considered Java, C/C++ of course. And we looked at Haskell and OCaml for functional programming, though neither has gotten much commercial use. Erlang developers are doing stuff with a lot of network I/O but not with a lot of disk I/O; the knowledge-base around the language wasn’t great though, and the community seemed inaccessible.

Java is easy to use, but it’s not very fun, especially if you’ve been using Ruby for a while. Java’s productive, but it’s just not sexy anymore. C++ was barely considered as an option. Some guys said, if I have to work in C++ again, I’m going to stab my eyes out with a shrimp fork. JavaScript on the server side via Rhino had performance problems, and it wasn’t quite there yet when we were evaluating it.

So what were our criteria for choosing Scala? Well first we asked, was it fast, and fun, and good for long-running process? Does it have advanced features? Can you be productive quickly? Developers of the language itself had to be accessible to us as we’d been burned by Ruby in that respect. Ruby’s developers had been clear about focusing it on fun, even sometimes at the expense of performance. They understood our concerns about enterprise-class support and sometimes had other priorities.

We wanted to be able to talk to the guys building the language, not to steer the language, but at least to have a conversation with them.

Was Scala Fast?
And did Scala turn out to be fast? Well, what’s your definition of fast? About as fast as Java. It doesn’t have to be as fast as C or Assembly. Python is not significantly faster than Ruby. We wanted to do more with fewer machines, taking better advantage of concurrency; we wanted it to be compiled so it’s not burning CPU doing the wrong stuff.

What Alex Likes About Scala
Scala is a lot of fun to work in; yes, you can write staid, Java-like code when you start. Later, you can write Scala code that almost looks like Haskell. It can be very idiomatic, very functional — there’s a lot of flexibility there.

And it’s fast. Scala’s principal language developer worked on the JVM at Sun. When Java started, it was clearly a great language, but the VM was slow. The JVM has since been brought into the modern age, and we don’t think twice about using it.

Scala can borrow Java libraries; you’re compiling down to Java bytecode, and it all calls back and forth in a way that is really efficient. We haven’t run into any library dependencies that cause problems. We can hire people with Java experience and they do pretty well.

The community is small but growing, and it’s really accessible. We got to sit down with Martin and ask him and his team about funding for Scala, how problems with Scala will get solved. We’ve never really had to call on that level of access, but it’s really nice to know it’s there.

The Grand Unified Theory of Scala
The grand unified theory of Scala is that it combines object-oriented programming (OOP) and functional programming (FP). Scala’s goal is to essentially say OOP and FP don’t have to be these separate worlds. It’s kind of zen, and you don’t get it when you first start. It’s really, really powerful; it’s nice to have a language with a thesis, rather than trying to appeal to every programmer out there. Scala is trying to solve a specific intellectual problem.

You can have methods that take anything from a string to something several steps up the inheritance chain from a string. The syntax is more flexible than Java’s, and it’s very human-readable: you can leave out the periods between method calls so code reads like a series of words. Your program can make nice declarative statements about the logic of what you’re trying to do.
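
For readers who haven’t seen Scala, here is a minimal sketch (not Twitter’s code) of that flexible syntax: the same method call written Java-style and again in infix form.

```scala
// A small, self-contained illustration of Scala's flexible method syntax.
class Account(val name: String) {
  def follows(other: Account): Boolean = true   // stubbed for the example
}

val alice = new Account("alice")
val bob   = new Account("bob")

alice.follows(bob)   // conventional, Java-style call
alice follows bob    // the same call in infix form, reading like a sentence
```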

Traits, Pattern-Matching, Mutability
With Scala, you can also use traits. This is handy because of course you have cross-cutting concerns in your application. For example, every object needs to be able to log stuff, but you don’t want everything extending from a logger class — that’s crazy. With Scala, you can use a trait to shove that right in, and you can add as many traits as you like to a given class or object.
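
A minimal sketch of the logging example, with assumed names, might look like this:

```scala
// Logging as a trait: mix it in wherever it's needed, no logger base
// class required, and a class can stack as many traits as it likes.
trait Logging {
  def log(msg: String): Unit = println(getClass.getName + ": " + msg)
}

class QueueConsumer extends Logging {
  def consume(item: String): Unit = log("consumed " + item)
}

new QueueConsumer().consume("tweet")
```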

You can choose between mutability and immutability. This can be dangerous. Nine times out of ten you’ll use immutable values, because you want predictability, especially when you have stuff running concurrently. But Scala trusts the programmer with mutability when he or she needs it.
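
In code, that choice is just val versus var; a tiny sketch:

```scala
val timeout = 30   // immutable: reassigning it is a compile-time error
var retries = 0    // mutable: allowed when you genuinely need it
retries += 1
// timeout = 60    // would not compile -- vals cannot be reassigned
```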

Scala has the concept of lazy values – you can say lazy val x = a really complicated function. That isn’t going to be calculated until the last second, when you need that value. This is nice.
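
A small illustrative example of a lazy val:

```scala
lazy val expensive = {
  println("computing...")                        // runs only on first access
  (1 to 1000).map(i => i * i).reduceLeft(_ + _)
}

// Nothing has been computed yet. The block runs the first time
// `expensive` is read, and the result is cached for later reads.
println(expensive)
println(expensive)   // no "computing..." the second time
```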

Pattern-matching is nice too. It lets you dive into a data structure so you can, for example, explode out a collection that matches an array with “2” as its third element. You can break out strings and regular expressions, and you can pattern-match groups with regular expressions.
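
Here is a sketch of both kinds of matches he mentioned, the array case and the regular-expression case:

```scala
// Diving into a data structure with pattern matching.
def describe(x: Any): String = x match {
  case Array(_, _, 2, _*) => "an array with 2 as its third element"
  case s: String          => "the string " + s
  case _                  => "something else"
}

println(describe(Array(7, 8, 2, 9)))   // matches the first case

// Pattern-matching groups out of a regular expression:
val Mention = """@(\w+): (.*)""".r
"@alice: hello there" match {
  case Mention(user, text) => println(user + " said: " + text)
  case _                   => println("no match")
}
```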

An oddball feature that is really useful is the ability to use XML literals, so you can assign an XML literal to a value as naturally as if it were a string. You don’t have to import SAX or some crazy XML library.
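
For example (a sketch, using the scala.xml support that was built into the language at the time):

```scala
// XML literals are part of the language; no external parser needed.
val status =
  <status>
    <id>42</id>
    <text>just setting up my twttr</text>
  </status>

// The value is a scala.xml.Elem you can query directly:
println((status \ "text").text)   // prints: just setting up my twttr
```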

The Concurrency Story
When people read about Scala, it’s almost always in the context of concurrency. Concurrency can be solved by a good programmer in many languages, but it’s a tough problem to solve. Scala has an Actor library that is commonly used to solve concurrency problems, and it makes that problem a lot easier to solve.

An Actor is an object that has a mailbox; it queues messages and deals with them in a loop, and it can leave a message on the floor when it doesn’t know what to do with it.
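
A minimal sketch of that mailbox-and-loop pattern, using the scala.actors library that shipped with Scala in this era:

```scala
import scala.actors.Actor._

case class Work(payload: String)

// An actor queues messages in its mailbox and handles them in a loop.
val worker = actor {
  loop {
    react {
      case Work(p) => println("processing " + p)
      case _       => // unrecognized messages are dropped on the floor
    }
  }
}

worker ! Work("tweet #1")   // asynchronous, fire-and-forget send
```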

You can model concurrency as messages – a unit of work – sent to actors, which is really nice. It’s like using a queuing system. You can also drop in java.util.concurrent stuff, Netty, and Apache Mina. You can rewrite the Actor implementation, and some folks have gone so far as rolling their own software transactional memory libraries.

Java interoperability is a big, big win. There are ten years of great libraries, things like Joda-Time. We use a lot of Hadoop and it has been easy to wire Scala to the Hadoop libraries. We use Thrift, without having to patch it; we use libraries from the Apache Commons and from Google.

How Twitter Uses Scala
So that’s why we use Scala, but how do we use it?

In the enterprise world, a service-oriented architecture is not new, but in Web 2.0 it is crazy new science. With PHP or Ruby on Rails, when you need more functionality, you just include more plugins and libraries, shoving them all in to the server. The result is a giant ball of mud.

So anything that has to do heavy lifting in our stack is going to be an independent service. We can load-test it independently, it’s a nice way to decompose our architecture.

What services at Twitter are Scala-powered? We have a queuing system called Kestrel. It speaks a souped-up version of the memcache protocol. We originally wrote it in Ruby — it got us through a few weeks, but because Ruby is a dynamic language, the service began to show its performance weak spots.
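
Since Kestrel speaks a memcache-style text protocol (SET enqueues, GET dequeues), a client can be sketched with nothing but a socket; the host, port, and queue name below are assumptions for illustration:

```scala
import java.io.{BufferedReader, InputStreamReader, PrintWriter}
import java.net.Socket

// Enqueue and dequeue one item over Kestrel's memcache-style protocol.
val sock = new Socket("localhost", 22133)
val out  = new PrintWriter(sock.getOutputStream)
val in   = new BufferedReader(new InputStreamReader(sock.getInputStream))

val payload = "hello"
out.print("set jobs 0 0 " + payload.length + "\r\n" + payload + "\r\n")
out.flush()
println(in.readLine())   // "STORED" on success

out.print("get jobs\r\n")
out.flush()
println(in.readLine())   // "VALUE jobs 0 5", then the payload and "END"
sock.close()
```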

Flock to Store the Social Graph
We use Flock to store our social graph, as a denormalized list of user ids. It’s not a graph database, so you can’t perform random walks along the graph. But it’s great for quickly storing denormalized sets of user ids, and doing intersections. We’re doing 20,000 operations a second right now, backed by a MySQL schema designed to keep as much as possible in memory. It has been very efficient — not many servers are needed.

Hawkwind for People Search
Our people-search is powered by a Scala-built service we called Hawkwind. It’s a bunch of user objects dumped out by Hadoop, where the request is fanned out to multiple machines and then pulled back together.

Hosebird for Streaming
We stream out tweets to public search engines, using a low-latency, HTTP-based, persistent connection system called Hosebird. We looked at queuing systems that financial-services companies use, but couldn’t find anything that could handle the volume of the load. We built something on top of Jetty using Scala. We have more Scala-powered services in the works that I can’t talk about.

Thrift for Transferring Data
We also use Thrift, built at Facebook and then open-sourced at Apache. With Thrift, you can define data structures and methods, and it deals with everything you don’t want to deal with to efficiently represent data and get it from point A to point B. As your system evolves, your method signatures change, and Thrift has a nice system for creating positional arguments and being backwards compatible.

These services make our life a lot easier. We often staff projects with two people who are pair programming, sitting together for six or eight weeks. These guys can build something like people-search in a couple of months.

The only problem with so many different teams is that there is some divergence in terms of operational approaches – we have to work with ops guys to monitor the right stuff, be it disk or memory or what have you — but we can resolve that jitter over time. We’re ok with the tradeoffs.

The Development Environment
OK, now let’s talk about the tools… the IDEs for Scala are not up to snuff, that is true. IntelliJ IDEA is good but it’s shockingly buggy. The solution we’ve settled on is just using a plain text editor. We use Emacs, as there’s a really nice mode for the build tool that takes the compile/test BS out of your workflow. Of course, you can give the IDEs a try. Even though I’m an IDE cynic, maybe they’ve improved; that said, a plain text editor can be really productive.

Simple Build Tool
sbt is the Simple Build Tool, but it’s not simple or limited in any way. It’s Scala’s answer to Ant and Maven, and really it’s a superset of Ant and Maven. It’ll set up a new project, create a nice project structure for you and manage dependencies — you can slap ‘em right in by copying XML.

You can write your own build tasks. We added support for Thrift in an afternoon; it’s got a library for shelling out, as Java is not so great at shell operations because it targets so many platforms. sbt is well documented. And the absolutely coolest feature is that it’s got an interactive console interface where you can type in code and see how it works.

So that means sbt can insert you, interactively, into your running program. This is great for debugging and great for sketching out code. You have a nice workflow where you don’t have to worry about compilation.

specs
We’re very test-driven. We’re not wedded to behavior-driven development (BDD), but the best testing library in Scala is BDD-oriented. You can throw in different mocking libraries, and they work just as well in Scala as in Java.
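
A sketch of what a specification looks like in that BDD style (the matcher names follow the specs library of the era):

```scala
import org.specs._

// Behavior-driven specs read like sentences: "X should Y".
object QueueSpec extends Specification {
  "a freshly created queue" should {
    "start out empty" in {
      val q = new scala.collection.mutable.Queue[Int]
      q.isEmpty must beTrue
    }
    "hand items back in FIFO order" in {
      val q = new scala.collection.mutable.Queue[Int]
      q += 1; q += 2
      q.dequeue must_== 1
    }
  }
}
```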

Libraries
We’ve built a bunch of libraries. We gather a lot of stats, I mean, A LOT. We spent the first year of Twitter pushing forward on features, but never thinking about what we were building scientifically. That bit us in the ass in a big way.

You’ve probably seen a gradual increase in stability. At conferences, people ask us if it was the switch from Ruby to Scala, or if it was more machines. But really what did it was gathering numbers on everything, setting metrics and trying to improve.

Ostrich helps here. It is an in-process statistics gatherer, with counters, gauges, timers. You can share stats via JMX, JSON-over-HTTP etc. Hopefully it’s pretty simple to use and easy to integrate.
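
Usage might look roughly like this; the package and method names reflect that era’s Ostrich and should be treated as assumptions:

```scala
import com.twitter.ostrich.Stats

// In-process stats gathering: counters and timers.
def lookupUser(name: String) = name.toUpperCase   // stand-in for real work

Stats.incr("tweets_received")            // bump a counter

val user = Stats.time("user_lookup") {   // time a block of work
  lookupUser("alice")
}
```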

Configgy manages configuration files and logging in a really nice, flexible way. You can include config files in one another and you can do inheritance; it throws in a really nice logging wrapper, with lazy evaluation on the values you’re trying to log so you don’t burn machine-time generating log statements. It has a subscription API for pushing out a new config file. It’s a little crazy to have our own config file format, but Scala makes it work.
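
A hedged sketch of Configgy in use, with an assumed config file; the API names follow the net.lag.configgy library of the time:

```scala
import net.lag.configgy.Configgy
import net.lag.logging.Logger

// /etc/myapp.conf might contain (illustrative):
//   hostname = "db1.example.com"
//   port = 3306

Configgy.configure("/etc/myapp.conf")
val config = Configgy.config

val host = config.getString("hostname", "localhost")
val port = config.getInt("port", 3306)

val log = Logger.get                    // Configgy's logging wrapper
log.info("connecting to %s:%d", host, port)
```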

xrayspecs: this is an extension to specs, because we need a way to test concurrent operations. Some of the extensions in xrayspecs have been merged back into specs. We can freeze and unfreeze time.

scala-json: this is a better Scala JSON codec. We’ve used this really heavily in production for a while. If you need something like this, hopefully it’ll do the job.
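
A sketch of round-tripping data through it; the exact API is an assumption based on that era’s library:

```scala
import com.twitter.json.Json

// Encode a Scala structure to JSON text, and parse JSON text back.
val encoded = Json.build(Map("user" -> "alice", "id" -> 42)).toString
println(encoded)                                   // {"user":"alice","id":42}

val decoded = Json.parse("""{"user": "alice", "id": 42}""")
println(decoded)                                   // a Map-like structure
```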

Other Twitter Scala libraries: Naggati (protocol builder for Apache Mina), Smile (Actor-powered memcached client), Querulous (a nice SQL database client) and Jackhammer (a load-testing framework in its early stages). Check out GitHub for more.

How Do we Teach People?
I think we’re employing at Twitter about half the people in the world who know the Scala language. The other half are academics or at Foursquare. Even though Scala’s getting more and more popular, fundamentally we can’t hire people with experience in the language.

Pair Programming, Code Reviews
To start people out, we pair program. It isn’t mandatory at Twitter, but it’s a great way to learn Scala. We’ve come up with a bunch of style guides. The good and bad thing is that Scala’s going to be C++ in ten years, because there’s just a lot of surface area and it can get complicated. For that reason, we are pretty rigorous about code style.

We do code reviews; it doesn’t go into the master branch if it hasn’t been reviewed by your peers. Right now, I’m working with a guy we hired from Google. He’s an amazing engineer, far better than I am, but at first he didn’t know Scala.

When I looked at his code, there was absolutely nothing wrong under the hood. But we’d go through and say, “Here’s where this line could be a little more idiomatic from a Scala perspective.” I do classes over lunch – but you need a big group to commit to come every week. Then there’s my book, and there are other books: Dave Pollak’s book, and the Odersky book (Programming in Scala, aka “the stairway book”). If you learn by example and need a desk reference, grab “the stairway book.” Or search Google for a talk by my co-worker on “The Seductions of Scala” for lots of examples.

What Version of Scala Does Twitter Use?
We use 2.7. It’s got a couple of warts, particularly in the collections classes. Scala 2.8 fixes a lot of those warts, and there’s a bunch of performance work in there too, plus the ability to have named arguments in your functions.

I’m co-organizing a Scala summit at the OSCON conference in Portland this summer; come to that if you want to learn more! There’s a great blog called DailyScala, where an engineer writes about what he’s learning. I learn stuff from that guy all the time…

And that was it! Many thanks to Alex for his magnificent talk, and to all the lovely folks who visited our offices! We had a lot of fun, we learned a ton, and now we’re looking forward on May 20 to hearing from Cloudera’s Jeff Hammerbacher — the man who conceived of and built the data team at Facebook — on Hadoop. Everyone’s invited!

[repost ]Chirp 2010: Scaling Twitter

original:http://www.slideshare.net/netik/billions-of-hits-scaling-twitter

 

[repost ]A Rough Comparison of Yahoo! S4 and Twitter Storm

original:http://rdc.taobao.com/team/jm/archives/1202

Item | Yahoo! S4 | Twitter Storm
License | Apache License 2.0 | Eclipse Public License 1.0
Implementation language | Java | Clojure and Java (the core is written in Clojure)
Architecture | Decentralized, peer-to-peer | Has a central node (Nimbus), but it is not critical
Communication | Pluggable communication layer, currently UDP-based | Built on Facebook's open-source Thrift framework
Events/streams | Sequences of <K,A> events; users can define their own event classes | Provides a Tuple class; users cannot define custom event classes, but they can name fields and register serializers
Processing units | Processing Elements (PEs), with built-in PEs for common tasks such as count, join, and aggregate | Bolts, with no built-in tasks; IBasicBolt is provided to handle automatic acking
Third-party integration | Provides an API and client adapters/drivers; third-party clients feed events in or consume them | Spouts are defined to produce streams; there is no standard output API
Persistence | Provides a Persist API; persistence can be triggered by frequency or by count | No specific API; users choose their own approach
Reliable processing | None; events may be lost | Optional guarantee that every event is reliably processed
Routing | Matches on EventType + keyed attribute + value; built-in count, join, and aggregate tasks | Stream groupings (Shuffle, Fields, All, Global, None, Direct); very flexible routing
Multi-language support | Java only, for now | Good; Java and Clojure natively, other non-JVM languages via Thrift and inter-process communication
Failover | Partial; data cannot fail over | Partial; data likewise cannot fail over
Load balancing | Not supported | Not supported
Parallelism | Depends on the number of nodes; not tunable | Worker and task counts are configurable; Storm spreads workers and tasks as evenly as it can
Dynamically adding/removing nodes | Not supported | Supported
Dynamic deployment | Not supported | Supported
Web management | Not supported | Supported
Code maturity | Half-finished | Mature
Community activity | Low | Active
Programming model | Code plus XML configuration | Pure code
Documentation | http://docs.s4.io/ | https://github.com/nathanmarz/storm/wiki/ and http://xumingming.sinaapp.com/category/storm/ (an excellent Chinese translation)

[repost ]SpiderDuck and NoSQL – The Architecture of Twitter's Real-Time URL-Fetching Service

original:http://blog.nosqlfan.com/html/3457.html

Twitter's developer blog recently published an article introducing SpiderDuck, Twitter's URL-fetching service, which uses Cassandra, HDFS, and Memcached as its storage components. It's a good example of how NoSQL systems are used in practice.

SpiderDuck's architecture is shown in the figure below:

It breaks down into the following components:

  • Kestrel: a queueing service used at Twitter; every URL that needs to be fetched is placed on this queue.
  • Schedulers: the schedulers decide, before a fetch, whether a URL should be fetched at all (URLs fetched within the last N days are skipped); during the fetch they handle redirects and schedule the fetch work; and after the fetch completes they parse the fetched content, extract its metadata, write the metadata to the Metadata Store, and write the fetched content to the Content Store. Since the schedulers are simply consumers of the queue, they don't depend on one another and scale out horizontally very well.
  • Fetchers: a fetching service with a Thrift interface whose main job is to fetch URL content; it also parses each site's robots.txt to throttle its fetch rate, so it can scale horizontally as fetch rates change.
  • Memcached: a distributed cache built on Memcached, used mainly to cache robots.txt contents for the fetchers.
  • Metadata Store: a Cassandra-based distributed hash table that stores the mapping from URLs to metadata about their content, and serves real-time metadata queries.
  • Content Store: an HDFS cluster that holds all of the fetched content; data is written into HDFS via Scribe.

NoSQLFan postscript: Cassandra became famous overnight thanks to Twitter; later, Twitter abandoned it as its architecture changed, and then several large Cassandra users ran into problems of their own, which has left Cassandra rather lukewarm of late. The fact that Twitter now uses Cassandra to store important metadata in its real-time URL-fetching service SpiderDuck should be a shot in the arm for anyone still taking a wait-and-see attitude.

For a detailed introduction to SpiderDuck, see the original article at engineering.twitter.com.

[repost ]Twitter Storm: An Open Real-Time Data Processing Platform

original:http://www.oschina.net/p/twitter-storm

Twitter has officially open-sourced Storm, a distributed, fault-tolerant real-time computation system hosted on GitHub under the Eclipse Public License 1.0. Storm is a real-time processing system developed at BackType, which is now part of Twitter. The latest version on GitHub is Storm 0.5.2, written mostly in Clojure.

Storm provides a set of general primitives for distributed real-time computation. It can be used for "stream processing", processing messages and updating databases in real time; this is another way of managing queues and clusters of workers. Storm can also be used for "continuous computation", running continuous queries over streams of data and emitting the results to users as a stream while it computes. And it can be used for "distributed RPC", running expensive computations in parallel. Storm's lead engineer, Nathan Marz, puts it this way:

Storm makes it easy to write and scale complex real-time computations on a cluster of computers; Storm is to real-time processing what Hadoop is to batch processing. Storm guarantees that every message is processed, and it's fast: on a small cluster it can process millions of messages per second. Best of all, you can develop with any programming language.

Storm's main features are:

  1. A simple programming model. Just as MapReduce lowers the complexity of parallel batch processing, Storm lowers the complexity of real-time processing.
  2. Any programming language. You can use a variety of languages on top of Storm; Clojure, Java, Ruby, and Python are supported by default. Adding support for another language only requires implementing a simple Storm communication protocol.
  3. Fault tolerance. Storm manages the failure of worker processes and nodes.
  4. Horizontal scalability. Computation runs in parallel across multiple threads, processes, and servers.
  5. Reliable message processing. Storm guarantees that each message is fully processed at least once; when a task fails, it retries the message from the message source.
  6. Speed. The system is designed so that messages are processed quickly, using ØMQ as the underlying message queue.
  7. Local mode. Storm has a "local mode" that fully simulates a Storm cluster in-process, letting you develop and unit-test quickly.

A Storm cluster consists of one master node and many worker nodes. The master node runs a daemon called "Nimbus", which distributes code, assigns tasks, and detects failures. Each worker node runs a daemon called "Supervisor", which listens for work and starts and stops worker processes. Both Nimbus and the Supervisors are fail-fast and stateless, which makes them very robust; coordination between them is handled by Apache ZooKeeper.

Storm's vocabulary includes Stream, Spout, Bolt, Task, Worker, Stream Grouping, and Topology. A Stream is the data being processed. A Spout is a source of data. A Bolt processes data. A Task is a thread running inside a Spout or Bolt. A Worker is a process that runs these threads. A Stream Grouping specifies what a Bolt receives as input: data can be distributed randomly (Shuffle), by field value (Fields), broadcast to all (All), always sent to a single Task (Global), ignored (None), or routed by custom logic (Direct). A Topology is a network of Spout and Bolt nodes connected by Stream Groupings. The Storm Concepts page describes these terms in more detail.
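
To make the vocabulary concrete, here is a sketch of wiring up a topology from Scala against Storm's Java API. TweetSpout and WordCountBolt are hypothetical user-defined classes, and the string component ids follow later 0.x releases (early versions used integer ids), so treat the exact signatures as assumptions:

```scala
import backtype.storm.{Config, LocalCluster}
import backtype.storm.topology.TopologyBuilder
import backtype.storm.tuple.Fields

// Spouts produce streams; Bolts consume them via a Stream Grouping.
val builder = new TopologyBuilder
builder.setSpout("tweets", new TweetSpout, 2)
builder.setBolt("counts", new WordCountBolt, 4)
       .fieldsGrouping("tweets", new Fields("word"))   // a Fields grouping

val conf = new Config
conf.setNumWorkers(4)

// Local mode simulates an entire cluster in-process, handy for testing.
val cluster = new LocalCluster
cluster.submitTopology("word-count", conf, builder.createTopology)
```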

Systems comparable to Storm include Esper, Streambase, HStreaming, and Yahoo! S4, of which S4 is the closest. The biggest difference between them is that Storm guarantees messages will be processed. Some of these systems have a built-in data storage layer, which Storm does not; if you need persistence, you can use an external database such as Cassandra or Riak.

The best way to get started is to read the official Storm Tutorial on GitHub, which walks through Storm's concepts and abstractions and provides sample code so you can run a Storm topology. During development you can run Storm in local mode, developing and testing topologies entirely in-process. Once everything is ready, run Storm in remote mode and submit your topology to run on a cluster. Maven users can pull the Storm dependency from clojars.org, at http://clojars.org/repo.

To run a Storm cluster you need Apache ZooKeeper, ØMQ, JZMQ, Java 6, and Python 2.6.6. ZooKeeper manages the various components of the cluster, ØMQ is the internal messaging system, and JZMQ is the Java binding for ØMQ. There is a subproject called storm-deploy that can deploy a Storm cluster on AWS with one click. For detailed steps, read "Setting up a Storm cluster" on the Storm wiki.

[repost]Why are Facebook, Digg, and Twitter so hard to scale?

original:


Real-time social graphs (connectivity between people, places, and things). That’s why scaling Facebook is hard, says Jeff Rothschild, Vice President of Technology at Facebook. Social networking sites like Facebook, Digg, and Twitter are simply harder than traditional websites to scale. Why is that? Why would social networking sites be any more difficult to scale than traditional web sites? Let’s find out.

Traditional websites are easier to scale than social networking sites for two reasons:

  • They usually access only their own data and common cached data.
  • Only 1-2% of users are active on the site at one time.

Imagine a huge site like Yahoo. When you come to Yahoo they can get your profile record with one get and that’s enough to build your view of the website for you. It’s relatively straightforward to scale systems based around single records using distributed hashing schemes. And since only a few percent of the people are on the site at once it takes comparatively little RAM cache to handle all the active users.

Now think what happens on Facebook. Let’s say you have 200 friends. When you hit your Facebook account it has to go gather the status of all 200 of your friends at the same time so you can see what’s new for them. That means 200 requests need to go out simultaneously, the replies need to be merged together, other services need to be contacted to get more details, and all this needs to be munged together and sent through PHP and a web server so you see your Facebook page in a reasonable amount of time. Oh my.

There are several implications here, especially given that on social networking sites a high percentage of users are on the system at one time (that’s the social part, people hang around):

  • All data is active all the time.
  • It’s hard to partition this sort of system because everyone is connected.
  • Everything must be kept in RAM cache so that the data can be accessed as fast as possible.

Partitioning means you would like to find some way to cluster commonly accessed data together so it can be accessed more efficiently. Facebook, because of the interconnectedness of the data, didn’t find any clustering scheme that worked in practice. So instead of partitioning and denormalizing data Facebook keeps data normalized and randomly distributes data amongst thousands of databases.

This approach requires a very fast cache. Facebook uses memcached as their caching layer. All data is kept in cache and they’ve made a lot of modifications to memcached to speed it up and to help it handle more requests (all contributed back to the community).

Their caching tier services 120 million queries every second and it’s the core of the site. The problem is memcached is hard to use because it requires programmer cooperation. It’s also easy to corrupt. They’ve developed a complicated system to keep data in the caching tier consistent with the database, even across multiple distributed data centers. Remember, they are caching user data here, not HTML pages or page fragments. Given how much their data changes, it would be hard to make page caching work.

We see similar problems at Digg. Digg, for example, must deal with the problem of sending out updates to 40,000 followers every time Kevin Rose diggs a link. Digg and I think Twitter too have taken a different approach than Facebook.

Facebook takes a Pull on Demand approach. To recreate a page or a display fragment they run the complete query. To find out if one of your friends has added a new favorite band, Facebook actually queries all your friends to find what’s new. They can get away with this only because of their awesome infrastructure.

But if you’ve ever wondered why Facebook has a 5,000 user limit on the number of friends, this is why. At a certain point it’s hard to make Pull on Demand scale.

Another approach to find out what’s new is the Push on Change model. In this model, when a user makes a change it is pushed out to all the relevant users, and the changes (in some form) are stored with each user. So when a user wants to view their updates, all they need to access is their own account data. There’s no need to poll all their friends for changes.

With security and permissions it can be surprisingly complicated to figure out who should see an update. And if a user has 2 million followers, it can be surprisingly slow as well. There’s also an issue of duplication. A lot of duplicate data (or references) is being stored, so this is a denormalized approach, which can make for some consistency problems. Should permissions be consulted when data is produced or consumed, for example? Or what if the data is deleted after it has already been copied around?

While all these consistency and duplication problems are interesting, Push on Change seems the more scalable approach for really large numbers of followers. It does take a lot of work to push all the changes around, but that can be handled by a job queuing system so the work is distributed across a cluster.
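
A toy sketch of the Push on Change flow, with all names illustrative: a write fans out through a job queue into each follower's own inbox, and a read is then just one lookup of the reader's own data.

```scala
import scala.collection.mutable

val followers = Map("kevin" -> List("alice", "bob"))   // who follows whom
val inboxes   = new mutable.HashMap[String, mutable.Queue[String]]
val jobs      = new mutable.Queue[(String, String)]    // (follower, update)

// A write enqueues one fan-out job per follower.
def publish(author: String, update: String): Unit =
  for (f <- followers.getOrElse(author, Nil)) jobs += ((f, update))

// Workers drain the queue, copying the update into each follower's inbox.
def drainJobs(): Unit = while (!jobs.isEmpty) {
  val (follower, update) = jobs.dequeue
  inboxes.getOrElseUpdate(follower, new mutable.Queue[String]) += update
}

publish("kevin", "dugg a link")
drainJobs()                       // in production, a worker cluster does this
println(inboxes("alice"))         // reading is one lookup of your own inbox
```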

The challenges will only grow as we get more and more people, more and deeper inter-connectivity, faster and faster change, and a greater desire to consume it all in real-time. We are a long way from being able to handle this brave new world.