“The Waste Land” (from Rap Genius is Hiring: Help Us Annotate the World, by Rap Genius Engineering Team)

Hands-down the best way to read this poem on the Internet. Here’s a great annotation, picked more or less at random from hundreds of other examples:

http://poetry.rapgenius.com/1653504



Junot Diaz’s amazing annotations on an excerpt from his own The Brief Wondrous Life of Oscar Wao



POODR

http://www.amazon.com/Practical-Object-Oriented-Design-Ruby-Addison-Wesley/dp/0321721330

A classic by Sandi Metz that’s as compact and readable as Ruby itself.



Gödel, Escher, Bach: An Eternal Golden Braid

http://www.amazon.com/G%C3%B6del-Escher-Bach-Eternal-Golden/dp/0465026567

“Every few decades, an unknown author brings out a book of such depth, clarity, range, wit, beauty and originality that it is recognized at once as a major literary event. This is such a work.” -Martin Gardner, Scientific American



September 14th, 2013

Probably one of the most important books of the modern era.


Metaprogramming Ruby

http://www.amazon.com/Metaprogramming-Ruby-Program-Like-Pros/dp/1934356476

How well do you know your eigenclasses and metaclasses? Your modules and method_missing’s? This is the definitive guide to some of Ruby’s most powerful (and dangerous) language features.

Bonus points if you can give a definition of metaprogramming better than “code that writes code.”




November 30th, 2013

oh yeah python’s great. let’s just throw all() in the global namespace. great idea…. FUCK PYTHON!!

October 12th, 2013

The Disney version? “code that re-imagines itself”

November 27th, 2013

python > ruby

January 15th, 2014

Prompted me to read chapter 6 of why’s poignant guide to ruby once more. That chapter is no less than a poetic introduction to metaprogramming in Ruby.


“The neat thing about these announcements is that they’re fairly structured” (from When Harvard Met Sally: N-gram Analysis of the New York Times Weddings Section, by ATodd)

And with good reason—the first step toward getting the Times to announce your wedding is to fill out a standardized form:



“I can’t be e.e.” (from Artistic Freedom, by Mayor Michael Bloomberg)

This is a shout-out to e.e. cummings, who famously wrote his name (and a lot of his poems) in all lowercase.



“Distribution” (from Heroku's Ugly Secret, by James Somers, ft. Andrew Warner, ATodd, Chrissy & LEMON)


“Here's our annotated source” (from Heroku's Ugly Secret, by James Somers, ft. Andrew Warner, ATodd, Chrissy & LEMON)

The full source is available on GitHub.

Brief summary
Our goal is to populate a table whose rows represent the lifecycle of every web request within a five-minute window. There are four columns: the request’s start time, its duration, the index of the dyno it was assigned to, and the time it spent in a dyno queue. All times are given in milliseconds.

results = matrix(c(start_times,
                   req_durations_in_ms,
                   dyno_assignments,
                   rep(0, total_requests)),
                 nrow = total_requests, ncol = 4,
                 # name the columns so a row can be indexed as row["start_time"], etc.
                 dimnames = list(NULL, c("start_time", "request_duration",
                                         "dyno", "time_in_queue")))

To fill out the first column, the list of start times, we imagined that in each millisecond of the simulation the number of new requests spawned is drawn from a Poisson distribution, so more than one request can spawn per millisecond. The idea is to average 9,000 requests per minute. (For simplicity you could get away with distributing the requests uniformly, but a Poisson distribution is truer to life.)

reqs_starting = rpois(simulation_length_in_ms, reqs_per_ms)
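The rpois call yields a count of new requests for each millisecond rather than the start times themselves. A minimal sketch of expanding those counts into the start_times column (the seed and window length here are assumptions for illustration, not values from the original script):

```r
set.seed(1)                              # assumed seed, for reproducibility
simulation_length_in_ms = 5 * 60 * 1000  # five-minute window
reqs_per_ms = 9000 / 60000               # 9,000 requests per minute on average

# reqs_starting[t] holds how many requests spawn during millisecond t
reqs_starting = rpois(simulation_length_in_ms, reqs_per_ms)

# repeating each millisecond index by its count yields one start time per
# request, already sorted in time order
start_times = rep(1:simulation_length_in_ms, times = reqs_starting)
total_requests = length(start_times)
```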

The second column we also pre-calculated. We sampled request times from real data given to us by Heroku, summarized in the table in the main text.

(Note that these times do not include in-dyno queuing, just the amount of time actually spent by the app processing each request. The whole goal of the simulation is to back out the queue times.)

rq = read.table("~/Downloads/request_times.txt", header=FALSE)$V1
req_durations_in_ms = sample(rq, total_requests, TRUE)

The last two columns will be filled out by the main loop of the program. That is, for each request we’re trying to (a) assign it to a dyno and (b) figure out how long it will queue.

In the naive routing regime, assigning a dyno is trivial: we just choose one randomly. In the intelligent regime, we ask each dyno when it’s going to be next available, and choose the dyno with the best (soonest) answer. Modulo some processing time (<20ms), this is equivalent to buffering requests at the router and dispatching them only when a dyno frees up.

for (i in 1:nrow(results)) {
  row = results[i, ]
  st = row["start_time"]
  duration = row["request_duration"]
  if (router_mode == "naive") {
    dyno = row["dyno"]
  } else {
    dyno = which.min(dyno_next_available)
  }
  # ... the queue-time bookkeeping shown next completes the loop body
}

To calculate time spent queuing, we subtract the current millisecond (this request’s start time) from the time its assigned dyno next becomes available. And we update each dyno’s “next available” time to account for the time required to service the current request. Rinse and repeat until we’ve run out of requests.

# clamp at zero: if the dyno is already free, the request doesn't queue
queue_time = max(0, dyno_next_available[dyno] - st)
results[i, "time_in_queue"] = queue_time
dyno_next_available[dyno] = st + queue_time + duration

When we’re done, we have a large table of requests, and for each one we know whether and for how long it queued.
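Pulling the snippets together, here is a condensed, self-contained sketch of the simulation. The dyno count, seed, and toy request-time distribution are made up for illustration (the real script samples Heroku's measured durations), and naive routing draws a random dyno inline instead of reading a pre-filled column:

```r
set.seed(1)
simulation_length_in_ms = 60000             # one simulated minute
reqs_per_ms             = 9000 / 60000      # 9,000 requests per minute
n_dynos                 = 75                # assumed fleet size

reqs_starting       = rpois(simulation_length_in_ms, reqs_per_ms)
start_times         = rep(1:simulation_length_in_ms, times = reqs_starting)
total_requests      = length(start_times)
# toy stand-in for Heroku's real request-time distribution
req_durations_in_ms = sample(c(50, 100, 500, 3000), total_requests,
                             replace = TRUE, prob = c(.5, .3, .15, .05))

simulate = function(router_mode) {
  dyno_next_available = rep(0, n_dynos)     # when each dyno frees up
  time_in_queue = numeric(total_requests)
  for (i in 1:total_requests) {
    st       = start_times[i]
    duration = req_durations_in_ms[i]
    dyno = if (router_mode == "naive") sample(n_dynos, 1)
           else which.min(dyno_next_available)
    queue_time = max(0, dyno_next_available[dyno] - st)
    time_in_queue[i] = queue_time
    dyno_next_available[dyno] = st + queue_time + duration
  }
  mean(time_in_queue)
}

mean_naive       = simulate("naive")        # random routing
mean_intelligent = simulate("intelligent")  # soonest-available dyno
```

Under any reasonable load, mean_intelligent should come out at or below mean_naive, which is the article's point.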

(Compare this style of simulation to, say, running a bunch of Ruby threads that sleep to mimic dynos processing a request. Here, once we have our distribution of request times, and request start times, our calculations are precise and precisely replicable. We treat each dyno as a vector of milliseconds and painstakingly figure out whether it will be processing or not at each tick. “Queues” of requests are represented by adjacent strings of “yes I’m working” marks in a dyno’s lifetime (its list of 1ms ticks). At any moment of the program’s execution we can interrogate each dyno to see precisely which request it’s serving, which are queued, etc. This is what allows us to make the graphs you see below, and gives us confidence in the correctness of the results.)
