Free flashcards for the fantastic book on Customer Development, The Mom Test

I made this set of flashcards for the excellent book on Customer Development, The Mom Test, since the process and questions in the book are quite important and worth memorizing. Learn what the pain points are by talking to customers first, before building. Feedback on the flashcards is welcome! Enjoy 🙂 http://quizlet.com/_r26z6

How our talented team won $2500 at the TechCrunch Disrupt NYC Hackathon

[Screenshot: CorpSquare]

We had an absolutely amazing and talented team at the TechCrunch Disrupt NYC 2014 Hackathon! Shout outs to our awesome front-end designers Amanda Gobaud and Michelle Lee, and our tireless devs, Amine Tourki, Andrew Furman, and Teddy Ku. Here are the lessons I learned from building a web application that won the $2500 first-place prize for the Concur Technologies API.

  • Our app, CorpSquare (Concur + Foursquare), solved a real problem. Several of the team members (me included) had used Concur at the companies we worked for, so we had firsthand experience with the problems, and the cool and practical use cases, that an app built on the Concur API could address. Even Concur's VP of Platform Marketing told us afterwards that he had seen many people with the problem we were trying to solve.
  • But we also played the game strategically. Concur is a business expense tracking platform, and most of its clients are big businesses. We felt that a business expense API wouldn't seem as “exciting” or “sexy” as some of the other consumer-facing start-up APIs (Evernote and Weather Underground, to name a few). Since the sponsoring companies each offered API-specific rewards for the teams that used their API in the coolest way, this implied there might be less competition for the Concur API reward. We made a “value” bet of sorts, as value investors would say, and the strategy seems to have paid off.
  • Our team's skills were complementary, but not too much so. A good hackathon team probably needs both design and dev skills, and different people should specialize in one or the other to be most efficient. But everyone should be well versed enough outside their specialty (designers in dev, devs in design) to communicate efficiently. For example, our designers were comfortable with both UI/UX design and front-end development like CSS. Several of our developers were full-stack, implementing the back end but also helping out with the front end. We also used technologies (frameworks, languages) that we were all comfortable with, which, perhaps coincidentally for us, was an advantage too.
  • Presentation matters, a lot. Our two wonderful front-end designers spearheaded the effort to make our web application beautiful. With everyone's help, beautiful it was. For the actual 60-second demo, we also selected our most energetic and enthusiastic speakers to present. First impressions matter, but when you're being explicitly judged against at least 250 other people, and 60 seconds of talking and app visuals is all you've got, first impressions really matter.

Hindsight is 20/20, of course, and causally linking our tactics and strategies to our success is fuzzy at best. But learning never stops; whatever happens, success or failure, there is always a lesson to take away, for yourself and for others.

Spreed – the exciting journey so far, and lessons learned

[Screenshot: Spreed]

Spreed, the speed reading Chrome extension I developed last year to scratch my own itch, recently took off in popularity. People wrote about it in a few different places, and our Chrome installs went up dramatically. The journey has just begun, but I've already learned some lessons that I wanted to share.

Lessons learned

  • Piggybacking on buzz can be an effective technique to increase awareness
    • We piggybacked (not deliberately) on the buzz created by the launch of Spritz, the speed reading startup. People wanted to learn more about speed reading and came across our Chrome extension when they searched for it. We could have done better if we had optimized our web presence for the keyword “Spritz” after the launch, but my excitement at going from 2k installs to 20k installs in less than 5 days blinded me. Which leads me to my next lesson…
  • Be aware of emotions, instead of letting them take control
    • My excitement at our growth caused me to naively focus on vanity metrics like installs and visits, which blinded me to the SEO opportunity mentioned above.
    • Another example: I recently almost made a grossly suboptimal decision regarding outsourcing development. Again, I let excitement and optimism tempt me to “forget” to use a disciplined decision-making approach. The particular one I like to use is called the WRAP technique (pdf), which I learned from the fantastic book Decisive, by the Heath brothers.
  • To quote Steve Jobs: “A lot of times, people don’t know what they want until you show it to them”
    • We've not only developed the features that our users have said would be most helpful to them, we've also developed (and are developing) game-changing features that we anticipate users will find immensely helpful. We test our hypotheses by collecting feedback from users and running small tests/experiments. The lesson here, I think, applies to all of life, not just product development: be proactive instead of just reactive.

The most exciting part has been working with our users to make Spreed as helpful as it can be. Building things that help people, having those people reach out to thank you, and then having conversations with them to make the product even better has been extremely meaningful. Some excerpts from our most enthusiastic and dedicated users:

“Your chrome app is phenomenal. I have been using it for 4 days now, and still find it hard to believe that such a basic app can change one’s life so much.”

“Thank you so much, this has revolutionized my life.”

“I am a dyslexic and I have always had difficulty reading with full comprehension. I can’t believe how this has changed this for me. I can read at 350 words with great comprehension. What happens for dyslexics is the words flow together sometimes forming new words that aren’t there. With this app I see only the word! It is going to be a life changer for me.”

There’s still a lot more to do, but I’m looking forward to the future. Learn by doing and building, strive to help others, and the journey will be an exciting one.

Shout out to Ryan Ma for the beautiful redesign of the Spreed Chrome extension!


Weekend hack: AngelList Alumni Bot

[Screenshot: AngelList Alumni Bot]

Ok, it’s more of a scraper than a “bot”. But I built it because I was looking through NYC startups on AngelList and wanted to find founders who had graduated from my alma mater, the University of Pennsylvania. I didn’t want to click through the AngelList startup pages one by one and then click on every founder. There was no easy way of doing what I wanted, and I also wanted to get to know the AngelList API a little better.

The AngelList Alumni Bot takes an input city (e.g. NYC), gets all of that city’s startups, grabs each founder’s name, and checks AngelList or LinkedIn to see whether the founder graduated from an input school (e.g. University of Pennsylvania).
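Here's a minimal sketch of that flow in Python. The endpoint paths, parameters, and location tag IDs below are assumptions for illustration, not the actual AngelList API surface; the real bot uses the API wrapper linked below:

```python
# Illustrative sketch only: endpoint shapes and tag IDs are assumptions,
# not the real AngelList API. The actual bot uses an API wrapper instead.
import requests

API_ROOT = "https://api.angel.co/1"          # AngelList's REST API root
LOCATION_TAGS = {"NYC": 1664, "SV": 1692}    # hypothetical location tag IDs

def startups_in(city):
    """Page through every startup tagged with the given location."""
    page = 1
    while True:
        data = requests.get(f"{API_ROOT}/tags/{LOCATION_TAGS[city]}/startups",
                            params={"page": page}).json()
        yield from data.get("startups", [])
        if page >= data.get("last_page", 1):
            break
        page += 1

def alumni_founders(city, school="University of Pennsylvania"):
    """Yield (startup, founder) pairs whose founder bio mentions the school."""
    for s in startups_in(city):
        roles = requests.get(f"{API_ROOT}/startup_roles",
                             params={"startup_id": s["id"], "role": "founder"}).json()
        for role in roles.get("startup_roles", []):
            founder = role.get("tagged", {})
            if school.lower() in (founder.get("bio") or "").lower():
                yield s["name"], founder.get("name")
```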

There are a lot of areas for improvement: it’s not a web app, it’s really slow, it currently only supports two cities/locations (NYC and SV) and one school (UPenn), and it only grabs one founder for each start-up, in a very hacky way, by exploiting AngelList page meta tags. You can contribute to the source code at https://github.com/troyshu/angellistalumnibot.

Everything was done in Python. I used and extended this AngelList API Python wrapper; my extended version is at https://github.com/troyshu/AngelList.

My first webapp built on a framework: wtfconverter

[Screenshot: wtfconverter]

www.wtfconverter.appspot.com converts between common units of measurement (liters, seconds, etc.) and silly units (butts, barns, etc.).
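The core idea is just a table of conversion factors. Here's a toy sketch; the factor below is approximate, and the app's real unit table isn't shown (a “butt” is an old wine-cask volume of roughly 477 liters):

```python
# Toy sketch of the conversion idea; the factor is approximate and
# the app's real unit table isn't shown here.
FACTORS = {("liter", "butt"): 1 / 477.0}  # ~477 liters per butt

def convert(value, src, dst):
    if (src, dst) in FACTORS:
        return value * FACTORS[(src, dst)]
    return value / FACTORS[(dst, src)]    # reverse direction: invert the factor

print(convert(954.0, "liter", "butt"))    # -> ~2.0 butts
```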

It was the first web application that I had developed using a web framework, in this case the webapp2 framework, on Google App Engine. This was two and a half years ago. Before that, I had developed everything from scratch, using PHP and MySQL for the backend.

This introduction to web frameworks intrigued me, and it’s what jump-started my journey into Ruby on Rails. Pushing local code to the Google App Engine production server and just having the site work blew my mind. Templating (the GAE tutorial taught jinja2) was like magic: creating and managing dynamic content was so much easier.
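For flavor, here's roughly what that webapp2 + jinja2 pattern looks like (handler and template names are illustrative, not wtfconverter's actual code):

```python
# A minimal webapp2 handler that renders a jinja2 template on App Engine.
# Names here are illustrative, not wtfconverter's actual code.
import os
import jinja2
import webapp2

JINJA_ENV = jinja2.Environment(
    loader=jinja2.FileSystemLoader(os.path.dirname(__file__)))

class MainPage(webapp2.RequestHandler):
    def get(self):
        template = JINJA_ENV.get_template("index.html")
        # The template fills placeholders like {{ units }} with these values.
        self.response.write(template.render({"units": ["liters", "butts"]}))

app = webapp2.WSGIApplication([("/", MainPage)], debug=True)
```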

I started out by following the GAE Python tutorial word for word; it walked the user through actually building a site. Then I developed my own little webapp that was a bit more useful but not much more complicated than what I had learned in the tutorial. This is exactly how I learned Ruby on Rails too: I walked through the Rails tutorial, building a microblogging app along with the author, and then built my own web app, Pomos, a Pomodoro Technique timer, using what I had learned. Pomos has since been deprecated, but here’s a screenshot:

[Screenshot: Pomos]


Anyway, I learned a lot from following these tutorials, where I actually developed something concrete, and then branching off to do my own thing. This is the heart of experiential learning, and it’s what Sal Khan, founder of Khan Academy, talks about in his book One World Schoolhouse: when a student takes ownership of his education by actually applying it, e.g. by building something, he is much more likely to enjoy learning new knowledge and skills. But reforming the current state of education is a topic for another post.

My holiday break project: quarterly earnings snapshot webapp

Link: www.qesnapshot.herokuapp.com

[Screenshot: Quarterly Earnings Snapshot]

Problem: I just wanted to find out how much a company’s current quarterly earnings grew compared to its earnings for the same quarter last year (year-over-year growth). I wanted this info for companies that had released their quarterly earnings recently (e.g. within the past few days), so I could generate new investment ideas: “I see AEHR released earnings recently, on December 23. How did its 2013 Q4 earnings grow from its 2012 Q4 earnings?”

There’s no easy way to do this. The current options are:

  1. go to an earnings calendar site like Bloomberg’s, look up the symbol, find quarterly earnings on a site like Morningstar, and calculate the growth % yourself, OR manually find and sift through press releases to find the earnings growth %
  2. pay a ton for data that tells you directly, through an API or a web interface that isn’t at all user friendly

Solution: Quarterly Earnings Snapshot is a webapp that scrapes an earnings calendar, then scrapes SEC EDGAR filings for companies’ recently released and historical quarterly earnings numbers. It displays earnings per share (EPS) and year-over-year (same quarter) EPS growth in an easy-to-read format, so I can get the relevant numbers at a glance.
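Once the EPS numbers have been scraped, the calculation itself is simple. A minimal sketch (the example values below are illustrative, chosen to reproduce the KBH figure mentioned next, not the actual filing numbers):

```python
# Year-over-year EPS growth from two scraped EPS values; the real app
# pulls these numbers out of SEC EDGAR filings.
def yoy_eps_growth(eps_current, eps_year_ago):
    """Percent growth of this quarter's EPS vs. the same quarter last year."""
    if eps_year_ago == 0:
        return None  # growth is undefined from a zero base
    return (eps_current - eps_year_ago) / abs(eps_year_ago) * 100

# Illustrative values only (not KBH's actual filings):
print(yoy_eps_growth(0.54, 0.07))  # -> ~671%
```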

After being in development for only a couple of days, the webapp has already helped me generate new stock investing ideas quickly. For example, a few days ago I checked the site and saw that KBH (KB Home) had released earnings a week or two earlier, on Dec 19, and that earnings per share had grown a whopping 671% (see the screenshot below).

[Screenshot: Quarterly Earnings Snapshot example]

This prompted me to do more research on KBH, as well as its competitors in the home-building industry, an industry that seems to be rebounding from a bottom. Some homebuilder stocks have already risen a lot; others are still undervalued, and so present potential investment opportunities.

Feedback and comments are always welcome! I know there are many different features I could add and many different directions I could take this. My short-term goal over the holidays was just to build something simple in both design and usage, and to share it.

Thanks for reading. Happy holidays and happy new year!

PS: the site is Ruby on Rails + Heroku. I’m extremely grateful; rapidly prototyping webapps for free/cheap would not be possible without them.

adaptivwealth: the new web app that I made to bring adaptive asset allocation to the masses

adaptivwealth: www.adaptivwealth.herokuapp.com

[Screenshot: adaptivwealth]

I recently finished the beta version of a web app I’ve been building to bring adaptive asset allocation to the masses.

What is adaptive asset allocation?

I’ve written about it in several previous posts. Essentially, it’s the idea that traditional Markowitz mean-variance asset allocation can be improved, generating portfolios with better risk-adjusted performance, by making the models more adaptive to market changes.

What’s the point of the web app?

adaptivwealth’s goal is to make models that try to improve upon the weaknesses of traditional asset allocation more accessible to individual investors.

Asset allocation, i.e. allocating one’s money to different asset classes such as equities, bonds, and commodities, often produces more diversified portfolios than, for example, just picking stocks. Portfolios constructed using asset allocation can have decreased risk and increased returns (see the above screenshot comparing the performance of the Minimum Variance Portfolio with the performance of the S&P 500 for an example). A portfolio’s holdings can be optimized so that return is maximized for a given level of risk. Asset allocation is powerful: the famous Brinson, Hood, and Beebower study showed that asset allocation is responsible for 91.5% of the variability of pension funds’ returns. Not stock selection, not market timing.
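As a concrete example of the kind of optimization involved, here is the classic closed-form minimum variance portfolio in Python with numpy. This is a sketch of the textbook formula, not adaptivwealth's actual (adaptive) models:

```python
# Unconstrained minimum variance weights: w = S^-1 * 1 / (1' * S^-1 * 1),
# where S is the covariance matrix of asset returns. A textbook sketch,
# not adaptivwealth's models; note that shorting is allowed here.
import numpy as np

def min_variance_weights(returns):
    cov = np.cov(returns, rowvar=False)   # sample covariance across assets
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)        # S^-1 * 1 without an explicit inverse
    return w / w.sum()                    # normalize: weights sum to 1

# Example with random data standing in for three asset classes
rng = np.random.default_rng(0)
print(min_variance_weights(rng.normal(size=(250, 3))))
```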

Asset allocation is traditionally not very accessible to individual investors. Individual investors face data, computation, knowledge, and/or time constraints that prevent them from running asset allocation algorithms to optimize their portfolios; asset allocation is usually performed by financial advisers on behalf of individual investors, while large institutions like pension funds and hedge funds obviously have the resources to do it themselves. Companies like https://www.wealthfront.com/ are closing this gap, cutting out the middleman (financial advisers) and lowering the cost of implementing asset allocation for the individual investor.

Companies like Wealthfront implement traditional asset allocation algorithms. adaptivwealth differentiates itself by using models that try to improve upon the weaknesses of traditional asset allocation, and by making those models more accessible to individual investors. One way to address those weaknesses is to make the models more adaptive to market changes.

A call for help

adaptivwealth is still very rough around the edges, and I have a whole list of features that I want to implement, ideas for growth, etc. But I wanted to get a minimum viable product out there and collect feedback as quickly as possible. Let me know your thoughts! Questions, suggestions for features, advice, criticisms, anything and everything helps. Thank you.

Naive Bayes classification and Livingsocial deals

[Image: Naive Bayes]

Problem: I was planning my trip to Florida and looking for fun things (“adventure” activities like jet ski rentals, kayaking, and go-karting) to do in Orlando and Miami. I like saving money, so I subscribed to Groupon, Livingsocial, and Google Offers for those cities. Those sites then promptly flooded my inbox with deals for gym memberships, in-ear headphones, and anti-cellulite treatments. Not useful. Going to each site and specifying my deal preferences took a while. Plus, if I found a deal that I liked, I had to copy-paste its link into another document so that I had it for future reference (in case I wanted to buy it later). Too many steps, too much hassle, unhappy email inbox.

Solution: I wanted to build a site that scraped the fun/adventure deals from these deal sites automatically. Example use case: if a person plans to visit a new city (e.g. Los Angeles), he or she could just visit the site and see at a glance a list of the currently active adventure deals (e.g. scuba diving) in that city. Sure, it seems that aggregator sites like Yipit solve this, but almost all such aggregation sites require users to give up their email address before showing them any deals (and most are also difficult to navigate). More unnecessary steps for the user. Plus, I found that the Yipit deals weren’t the same as the ones displayed on the actual Groupon/Livingsocial/Google Offers sites.

“Pre” minimum viable product: I gathered feedback on my idea to see whether people besides me would actually use it. This time, I just made a few quick posts on reddit (in the city subreddits) and got many comments. People said they would use it. Next.

MVP: The site I built scrapes Livingsocial. Groupon generates its pages dynamically with Ajax, which can’t be scraped without a JS engine (a big pain to set up), and Google Offers didn’t have very many quality deals, so I decided to simplify by making the MVP Livingsocial-only for now.

Applying the Naive Bayes classifier

After scraping all the deals, they need to be classified as “adventure” or not. Obviously, doing this by hand wouldn’t scale if I wanted to scrape deals for more than a couple of cities. So I implemented a Naive Bayes classifier. Naive Bayes is often used in authorship identification, e.g. determining whether Madison or Hamilton wrote certain unattributed essays in the Federalist Papers.

At a high level, Naive Bayes treats each “document” or block of text as a “bag of words”, meaning that it doesn’t care about the order of the words. When given a new “document” to classify, Naive Bayes answers the question: for each classification/category, what is the probability that this new document belongs to it? The category with the highest probability is the one Naive Bayes “predicts” for the new document.
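Here's a toy bag-of-words Naive Bayes in Python to make that concrete. It's illustrative only; the site's actual classifier and training data aren't shown, and the example headlines are made up:

```python
# Toy bag-of-words Naive Bayes with add-one smoothing; illustrative only.
import math
from collections import Counter

def train(docs):
    """docs: list of (text, label) pairs. Returns the fitted model."""
    counts, totals, priors = {}, Counter(), Counter()
    for text, label in docs:
        priors[label] += 1
        words = text.lower().split()
        counts.setdefault(label, Counter()).update(words)
        totals[label] += len(words)
    vocab = {w for c in counts.values() for w in c}
    return counts, totals, priors, vocab

def predict(model, text):
    """Return the label with the highest posterior log-probability."""
    counts, totals, priors, vocab = model
    n_docs = sum(priors.values())
    scores = {}
    for label in priors:
        score = math.log(priors[label] / n_docs)  # log prior P(label)
        for w in text.lower().split():
            # add-one smoothed log likelihood of P(word | label)
            score += math.log((counts[label][w] + 1) / (totals[label] + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

model = train([("jet ski rental miami", "adventure"),
               ("kayak tour for two", "adventure"),
               ("gym membership deal", "other"),
               ("anti-cellulite treatment package", "other")])
print(predict(model, "go kart racing for two"))  # -> "adventure"
```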

The site currently uses the deal “headline” (e.g. “Five Women’s Fitness Classes” or “Chimney Flue Sweep”) as the document text for Naive Bayes. I also tried using the actual deal description (i.e. the paragraph or two of text that Livingsocial writes to describe the deal), and from eyeballing the predictions, both gave similar accuracy. Using the deal headline is a lot faster, though.

Prediction accuracy is still pretty bad, and I didn’t want Naive Bayes to automatically assign its predicted categories to the deals, so I decided to keep categorizing the deals manually, but with the help of Naive Bayes’s recommendations. I also decided to make its binary classification decisions more “fuzzy”. Below is a screenshot of the admin page that shows me the predicted deal type of the scraped deals, with a “prediction confidence” column: a score derived from the Naive Bayes output that signifies how strong its prediction is.

[Screenshot: admin page with prediction confidence column]
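One plausible way to derive such a confidence score (an assumption on my part, not necessarily the site's actual formula) is the margin between the top two per-label log scores from a classifier like the sketch above:

```python
# Hypothetical confidence score: the margin between the best and
# second-best label log-probabilities; a larger gap means a stronger call.
def confidence(label_scores):
    """label_scores: {label: log-probability} for one document."""
    best, runner_up = sorted(label_scores.values(), reverse=True)[:2]
    return best - runner_up
```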

No better way to learn than to do

Doing is the best way to learn, because working on your own projects forces you to engage in deliberate practice (Cal Newport’s key to living a remarkable life). Not only do you practice your skills, but you also learn about learning: when you face an obstacle while working on a personally initiated project, you have just you and your own resourcefulness, with no boss telling you what to do or professor giving guidelines. For example, this time I encountered the issue of my requests timing out in production on Heroku, since Heroku has a max request time of 30 seconds and some of my requests were taking up to a few minutes (back when my Naive Bayes implementation was inefficient). I googled my problem, found a stackoverflow post, and learned about worker queues and the Ruby library delayed_job, which fixed my problem by allowing time-intensive work to run in the background.
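The background-job idea looks roughly like this (sketched here in Python with an in-process queue for illustration; the actual fix used Ruby's delayed_job with a Heroku worker):

```python
# The worker-queue pattern: enqueue slow work, respond to the web
# request immediately, and let a background worker do the heavy lifting.
import queue
import threading
import time

def classify_deals(batch):            # stand-in for the slow Naive Bayes pass
    time.sleep(2)
    print(f"classified {len(batch)} deals")

jobs = queue.Queue()

def worker():
    while True:
        fn, args = jobs.get()
        fn(*args)
        jobs.task_done()

threading.Thread(target=worker, daemon=True).start()

def handle_request(deal_batch):
    jobs.put((classify_deals, (deal_batch,)))  # hand off the slow work...
    return "202 Accepted"                      # ...and return within the 30s limit

print(handle_request(["deal"] * 10))
jobs.join()  # for the demo only; a real worker process runs forever
```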

The site is at https://adrenalinejunkie.herokuapp.com/

What I made on a Sunday afternoon: Spreed, a speed reading Chrome extension


I was annoyed at something: I liked using the speed reading app at http://www.spreeder.com/ to blaze through online content, but I hated all the clicks and copy-pasting needed to do it. When I’m annoyed at something, I see if I can solve it. So I decided to develop a simple speed reading Chrome extension. I had never developed a Chrome extension before (though it’s just JavaScript + HTML + CSS anyway), so this was also a great chance to try something new.

7 hours later, I submitted my extension to the Chrome web store. As I receive feedback over the next few weeks and refine my minimum viable product, I’ll post more lessons on what I’ve learned in the process.

Lessons from a “failed” web app: predictd.com

[Screenshot: predictd traffic graph]

Above is a graph of the number of visits per day to predictd.com, a timelapse stock trading simulator I developed about a year and a half ago. The two big spikes in traffic came when I posted predictd to reddit: once to the main site, once to r/investing, the investing subreddit. Traffic died down after each spike. The seemingly consistent visits during July came when I introduced predictd to the other interns at the hedge fund I worked at this summer: we would hold competitions to see who could make the most money.

predictd was not sticky, at least to the average internet user. People didn’t come back to the site weeks or months after they had discovered it. Perhaps it was fundamentally “anti-sticky”, due to the main feature that, a year and a half ago, I had simple-mindedly thought would make predictd sticky: the leaderboard. My original thought was that competition would bring people back to the site. But I noticed that people who did well got on the leaderboard and didn’t trade after that, preserving their lead, while people who didn’t do well stopped trading out of frustration. So no matter what, people didn’t come back.

I realize now that, aside from predictd’s “anti-stickiness”, my target audience may have been wrong too. Over the summer I introduced predictd to the other interns I was working with. We were at a hedge fund, all working right next to each other, so having mini trading competitions on predictd provided fun, relevant breaks. Perhaps implementing such a “tournament-style” or “contained” method of competition would have helped predictd’s traffic. Targeting finance professionals, or at least people interested in finance/investing/stock trading, seemed like a good idea too: the only consistent traffic came from my hedge fund colleagues, and the second spike in traffic (when the site was submitted to the r/investing subreddit) surprisingly reached about the same magnitude of visits/day as the first spike (when it was submitted to general reddit).

predictd.com still seems revivable. I’ll have to put it on the back burner, though, because I have other projects on my plate that I’m pursuing in my journey to become a better and smarter web developer. predictd may have been a “failure” in terms of growth, but I definitely learned some important lessons. There are no failures, only learning experiences.