I guess you missed the part where my girlfriend called and got a replacement for the exact same Kindle that they wouldn't replace. Or the point where nearly everyone has gotten a replacement for the same damage or less (because it isn't damage). Or the point where Apple replaced a much more expensive MacBook Air where there actually was screen damage (due to crushing). Or the part where one user has to pay $80 for a replacement and another user gets five replacements for free. All showing that the big issue here is that there is a double standard.
But no matter, I guess it could be worse. I could be a corporate tool who enjoys sucking Jeff Bezos’s cock by trolling on blogs by leaving comments bitching people out for Amazon’s own failure.
In any case, it doesn't matter much to me since I don't own a Kindle anymore. I use the Kindle app.
I'm glad you are learning something other than PHP, if only because better programming is not language dependent. As for me, I'll always be a PHP guy at heart. :-)
Alexa estimates LinkedIn's traffic at around 2 billion a day. I don't know where the 2 billion/month that Joyent is reporting comes from. Perhaps it was a mistranslation? I find that a little weird, since Joyent is in the business of monitoring traffic.
Most likely your original guess is right: only a tiny fraction (say 1/30th) of LinkedIn's consumer-facing infrastructure is on Joyent (and Rails), and that fraction is doing 2 billion/month. In which case my questions are valid ones: 1) Is LinkedIn on Rails? and 2) doesn't that mean there are a lot of larger Rails sites out there?
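A quick back-of-the-envelope check shows the two figures are consistent with that guess (all numbers here are the rough estimates from above, not measured values):

```python
# Reconciling Alexa's daily estimate with Joyent's monthly figure.
# Both inputs are rough estimates from the discussion, not real data.
alexa_daily = 2_000_000_000          # Alexa: ~2 billion/day site-wide
alexa_monthly = alexa_daily * 30     # ~60 billion/month site-wide
joyent_monthly = 2_000_000_000       # Joyent: ~2 billion/month on Rails

rails_fraction = joyent_monthly / alexa_monthly
print(f"Rails-served fraction: {rails_fraction:.3f}")  # ~0.033, i.e. ~1/30th
```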
Hmm, this is possibly a good point, but more likely not. It depends on the testing conditions DxO used. Let me explain.
DxO's "Sports" score is a measure of ISO noise. The metric they use is signal-to-noise ratio. My thinking is that they standardize the signal so it has the same "size" on the sensor. For instance, if the signal is the letter "E" (like a giant letter E), they would make that letter E take up the same percentage of area on whatever sensor they are testing. Thus, larger sensors would have lower noise.
However, more megapixels in the same-sized sensor will not show more "noise" by this measurement, because what is being measured is the amount of noise relative to the signal at the spatial frequencies necessary to resolve the E, not at the spatial frequency of the sensor's pixels.
This is a fair test, don't you think? I'd be hard pressed to call comparing noise at the sensor's inherent spatial frequency fair. It would also be hard to test, since very few lenses have any aperture with near-perfect performance at the spatial frequency of the sensor (no lens, afaik), and thus your test would depend heavily on lens performance. OTOH, it's pretty easy to find an aperture where any decent lens has resolution maximized and diffraction effects minimized.
If this weren't the case, I fail to see how large sensors with lots of megapixels continue to outperform smaller sensors with fewer (I'll admit it's hard to control for this, since lower-megapixel sensors tend to be older).
More megapixels mean smaller electron wells but more sample points from which to tease out the signal. One makes things worse; the other makes things better, and in many domains the improvement does not accrue faster than the degradation. In this case, if we are in a high-ISO regime where the signal is electrically gained near its limits, then shot noise is dominant, and inaccuracies due to flaws in the electronic or optical systems are magnified by the amplification.
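The pixel-size tradeoff above can be sketched with a toy simulation (a sketch under the assumption that shot noise dominates, i.e. the high-ISO regime; the photon counts are made up for illustration). Per-pixel SNR drops on the finer sensor, but after binning back to a common output size both sensors have collected the same total light, so the normalized SNR is essentially identical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Same sensor area, same total light; only the pixel count differs.
photons_on_patch = 10_000    # mean photons landing on the whole patch
coarse_pixels = 4            # "low megapixel" sensor: 4 big pixels
fine_pixels = 16             # "high megapixel" sensor: 16 small pixels

def snr(mean_per_pixel, n_pixels, trials=200_000):
    """Empirical SNR of the binned (summed) signal under pure shot noise."""
    samples = rng.poisson(mean_per_pixel, size=(trials, n_pixels)).sum(axis=1)
    return samples.mean() / samples.std()

# Per-pixel SNR is worse on the fine sensor (smaller wells, fewer photons)...
snr_one_big = snr(photons_on_patch / coarse_pixels, 1)    # ~sqrt(2500) = 50
snr_one_small = snr(photons_on_patch / fine_pixels, 1)    # ~sqrt(625)  = 25

# ...but binned down to the same output "E", both see the same total light.
snr_coarse_binned = snr(photons_on_patch / coarse_pixels, coarse_pixels)
snr_fine_binned = snr(photons_on_patch / fine_pixels, fine_pixels)
print(snr_coarse_binned, snr_fine_binned)   # both ~sqrt(10000) = 100
```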
Because of this (very likely) possibility, I’d argue that my original view holds: if you are looking at an uncropped image, have a final format in mind, and the sensor has more than "enough" megapixels for that format, then you can just compare the DxOMark scores head-to-head.
However, images tend to get cropped, we don’t know the final format size of the image, and there is also a little something called a lens that is a factor.
Good point. My point is that if you run all your Java code behind JRuby, you do not get to call yourself a "Rails site," let alone "the largest Rails site." If they've been doing this for a few years now, I would not be surprised if the entire front end uses a Rails framework; I'd be shocked if they made a complete transition in a couple of months, since architectural changes are costly and slow (ask Delicious and Friendster, both (IMO) failed transitions to PHP). At what point would you call it "Rails" and not "Java"? It seems like the heavy lifting is not in Rails, and if it is, you're doing something very wrong (see Twitter/Rails or Facebook/PHP or Yahoo Search/PHP or…).
In any case, nobody noticed the bigger problem I had (in fact, the title of my post): I don't think 2 billion monthly counts as the "largest Rails app" by any measure. I've worked for at least two companies (not Rails) that do over 3x those monthlies, and both employ fewer people than LinkedIn. Two billion monthlies is simply not an impressive number anymore; heck, Facebook alone probably does many times more than 2 billion each day!
Even if we discount Twitter as a Rails site (I don't know if that's fair), we're still left with Hulu and YellowPages.com (off the top of my head), which I think would be excellent candidates for Rails sites breaking a 2 billion monthly count. There have got to be a couple of Facebook apps/games written in Rails that do that, right?
In any case, if the pages served by Rails are only a fraction of their total, and that fraction is at 2 billion, there should be many Rails sites bigger than theirs.
They do make a good amount of money, however. :-)