What I made on a Sunday afternoon: Spreed, a speed reading Chrome extension


I was annoyed at something: I liked using the speed reading app at http://www.spreeder.com/ to blaze through online content, but I hated all the clicks and copy-pasting needed to use it. When I’m annoyed at something, I see if I can solve it. So I decided to develop a simple speed reading Chrome extension. I had never developed a Chrome extension before (though it’s just javascript + html + css anyway), so this was also a great chance to try something new.
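The heart of a speed reader like this is simple: split the text into words and flash them one at a time at a chosen words-per-minute rate. A minimal sketch of that loop (not the actual extension code; the function names are mine):

```javascript
// Split a chunk of selected text into words.
function wordsOf(text) {
  return text.trim().split(/\s+/).filter(Boolean);
}

// Flash words one at a time at the given WPM. `show` is whatever
// updates the on-screen overlay (e.g. sets an element's textContent).
function flashWords(text, wpm, show) {
  const words = wordsOf(text);
  const msPerWord = 60000 / wpm; // e.g. 300 wpm -> 200 ms per word
  let i = 0;
  const timer = setInterval(function () {
    if (i >= words.length) {
      clearInterval(timer);
      return;
    }
    show(words[i++]);
  }, msPerWord);
}
```

Wrap that in a content script plus a small overlay div and you have the bones of the extension.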

7 hours later, I submitted my extension to the Chrome web store. As I receive feedback over the next few weeks and refine my minimum viable product, I’ll post more lessons on what I’ve learned in the process.

Lessons from a “failed” web app: predictd.com


Above is the graph of the number of visits per day to predictd.com, a timelapse stock trading simulator I developed about a year and a half ago. The two big spikes in traffic were when I posted predictd to reddit: once to the main site, once to r/investing, the investing subreddit. Traffic died down after each spike. The seemingly consistent visits during July came when I introduced predictd to the other interns at the hedge fund I worked at this summer: we would compete to see who could make the most money.

predictd was not sticky, at least to the average internet user. People didn’t come back to the site weeks or months after they had discovered it. Perhaps it was fundamentally “anti-sticky” because of the very feature I had simple-mindedly assumed, a year and a half ago, would make it sticky: the leaderboard. My original thought was that competition would bring people back to the site. But I noticed that people who did well got on the leaderboard and stopped trading after that, protecting their position. People who didn’t do well stopped trading out of frustration. So either way, people didn’t come back to the site.

I realize now that, aside from predictd’s “anti-stickiness”, my target audience may have been wrong too. Over the summer I introduced predictd to the other interns I was working with. We were at a hedge fund, all working right next to each other, so having mini trading competitions on predictd provided fun, relevant breaks. Perhaps implementing such a “tournament-style”, contained form of competition would have helped predictd’s traffic. Targeting finance professionals, or at least people interested in finance, investing, and stock trading, seemed like a good idea too: the only consistent traffic came from my hedge fund colleagues, and the second spike in traffic (when the site was submitted to r/investing) surprisingly reached about the same magnitude of visits per day as the first spike (when it was submitted to general reddit).

predictd.com still seems revivable. I’ll have to put it on the back burner, though, because I have other projects on my plate that I’m pursuing in my journey to become a better and smarter web developer. It may have been a “failure” in terms of growth, but I definitely learned some important lessons. There are no failures, only learning experiences.

Building a product that solves a personal pain point, and how that flips validated learning on its head

POMOS was born out of personal frustrations that I had while interning this summer trying to find a Pomodoro timer that I liked. You can read the story here.

I originally wanted to see if people would pay for Pomodoro tagging, so I set up a Google Analytics event tracking javascript snippet to count how many people would click “Sign Up” for the paid Premium account. Only one person clicked the Premium “Sign Up” button during the couple of weeks I ran this test, and who knows if they actually wanted to buy or just wanted to see what my payment page looked like. Hypothesis rejected? I then realized it was a little absurd that I was holding back one of POMOS’s core differentiating features from free users: how would they ever know what POMOS was really capable of? How would they ever know that POMOS had value worth paying for?
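The tracking itself is only a few lines. Something along these lines, assuming Universal Analytics is loaded on the page (the `#premium-signup` id and the event category/action/label here are illustrative, not my actual markup):

```javascript
// Build the event payload that ga('send', ...) would report for
// one click of the Premium "Sign Up" button. The category, action,
// and label values are made-up examples.
function premiumClickEvent() {
  return {
    hitType: 'event',
    eventCategory: 'button',
    eventAction: 'click',
    eventLabel: 'premium-signup'
  };
}

// Wire it to the button when Analytics is actually present.
if (typeof ga !== 'undefined') {
  document.getElementById('premium-signup')
    .addEventListener('click', function () {
      ga('send', premiumClickEvent());
    });
}
```

The clicks then show up in the Analytics event reports, where they can be compared against page views.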

So I polished up the Pomodoro tagging feature, which I had been working on for the past few weeks, and released it. I capped the number of tags free users could have, so they could get a taste of POMOS’s most important feature without having to pay. I also implemented accepting payments (through Stripe) on the same day and pushed that to production as well. Now users could experience the value of POMOS, and I could measure whether they actually thought it had value by the number of subscription sign-ups. A better hypothesis test/experiment.
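The freemium gate boils down to a single check. A sketch of the idea (the cap of 3 tags is an assumed number for illustration, not my real limit):

```javascript
// Free-tier gate: free users get a taste of tagging, premium
// subscribers are unlimited. FREE_TAG_LIMIT is illustrative.
const FREE_TAG_LIMIT = 3;

function canAddTag(user) {
  if (user.premium) return true;            // paid: unlimited tags
  return user.tags.length < FREE_TAG_LIMIT; // free: capped
}
```

The point of the cap is exactly what the post describes: free users hit the limit only after they’ve already experienced the feature’s value.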

Then it hit me: even if all my validated learning experiments failed, and absolutely no one visited the site or found it useful, I still would’ve built POMOS. Because it solves a personal problem that I have. I wouldn’t have taken no for an answer until I built the product that I would want to use. The beauty of building something that solves a problem close to your heart is that oftentimes, someone else in the world will have the same problem. Sure, I was collecting all this data on which pages users visited, how many times users went to the “Pricing” page, how many times users clicked the “Sign Up” button for the paid Premium account, and so on. But I was always developing the next feature while the experiment was running, and I would release the feature pretty much regardless of what my experiment results told me, because I was building something that I wanted to use.

Validated learning in the traditional sense of asking “is this product valuable?” before spending time developing something no one wants was thrown out the window: I already knew it would be valuable to me, so I was going to build the product no matter what. But running experiments still has a role in answering questions like “do other people find this product valuable?” and “how do they find it valuable?”

I’m still a rookie at this web dev, building-products thing. But I’ve already learned so much just from doing it, things one can’t learn from books alone.

Using a launch page as an MVP to achieve validated learning


I set up a launch page on LaunchRock for Pomos, and set startpomos.com to point to it. The launch page only contained a short description and a box for people to submit their emails.

The point of this page was to test my hypothesis: would people find Pomos useful? Not only that, but to do so in a way that would let me learn without spending too much time and energy on building the product first.

In the past (e.g. with predictd), I made the mistake of spending months on development, then launching the product, only to find that no one used it. Part of this was because of my newbie web development skills, part of this was building first and exploring the market second.

To use an entrepreneur’s time more efficiently, Eric Ries introduces two key concepts in his book The Lean Startup. One is validated learning: coming up with a hypothesis about the company’s growth or value (or both), and testing that hypothesis by collecting the relevant metrics. Oftentimes this data collection is done with a minimum viable product (MVP), a barebones product that is just good enough to get the data you need.

The growth hypothesis is how the entrepreneur thinks the website will grow (referrals, ads/searches, etc.). The value hypothesis is what value the entrepreneur thinks the site provides. I decided to test some value hypotheses first.

The first value hypothesis was “people think the idea of Pomos is useful, and want to use Pomos to implement the Pomodoro Technique and boost their productivity”. I tested this hypothesis with the launch page, an MVP of sorts. Page views were rather low (I don’t have much of a social network reach…), but conversion rates were decent (~10%). I received only a little feedback, even after reaching out to the first signups: a few people were looking forward to it, and several who didn’t provide their emails said they were afraid of handing over their email address for something they couldn’t even use yet. Good point. I didn’t want to submit the launch page to blogs, sign up for AdWords, etc., because there wasn’t even a product yet.
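For reference, the conversion rate here is just email signups divided by page views. The numbers in the example are made up to illustrate the ~10% ballpark, not my real stats:

```javascript
// Launch-page conversion rate: signups over page views.
function conversionRate(signups, pageViews) {
  return signups / pageViews;
}

// e.g. 12 signups on 120 views would be the ~10% ballpark.
```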

I decided to continue developing Pomos (which I was actually doing while waiting for my launch page stats). I considered my value hypothesis supported by the feedback and conversion rate I received. But the biggest factor was that Pomos scratched my own itch: I wished for an app like Pomos every day at my summer internship, and I often found myself wishing I had it while I was developing it (you can read more about how I came up with the idea for Pomos on the Pomos Help page).

The second value hypothesis is currently being tested. Once the data is in, I will write another post on it.

That’s how I used a launch page as an MVP to achieve validated learning, and have decided to develop and launch Pomos.

Moving on with Rails

After finishing a hectic last week at my summer internship, I finally have some time. So I skimmed through the Django tutorial and then started on its first page. Then I stopped. I thought to myself: I have an idea that I want to build; why am I stalling by learning a new framework? I felt like I wanted to “prepare” myself better by learning both Django and Rails. But I need to stop stalling and realize that oftentimes I have to make decisions without complete information. So now I’m building something with Rails.

What is this? And Rails vs. Django

This blog is to keep me accountable in my web development journey. Excuse the brevity. 

Anyway, I spent the past few weekends going through the Rails Tutorial by Hartl, which is actually really good: he treats you like a developer and encourages good web dev practices. His whole tutorial uses git, Heroku, integration testing, etc.

Time to move on to Django, which a few seemingly knowledgeable people on reddit seem to think is a good idea. I think I’ll like it better; I have a stronger background in Python anyway.