
Tradervue launches today!

Well, it’s been a while in the making, but today I’m very excited to announce the launch of Tradervue, a web application for active traders!

When I left my full-time position at NewsGator about a year and a half ago, I started actively trading equities intraday. Yep, one of those day traders. I was thinking “I’m an engineer, how hard can this be?” Ha! Turns out it was harder than I thought.

I spent some time searching for a trading methodology that worked for me, and one that specifically worked for my personality. I like instant gratification: I often use overnight shipping when I order things, I hate that my TV takes 10 seconds or so to warm up, and I like trading during the day, where I’m not subject to the whims of the market overnight when I can’t do much about it.

I eventually settled into a rhythm, with the help of many smart traders I met online, where I was trading actively moving stocks that had some catalyst for moving that day (earnings announcement, fresh news, etc.), and I would watch the order flow and do my thing. I worked pretty hard at it – I was at the screens an hour before the market opened, would trade most of the day, and then a few hours of prep work at night for the next day.

I also kept a trading journal in Pages (a word processor), where I would write down why I was making certain trades, how I was feeling about it at the time (confident, anxious, etc.), and I’d paste in order execution data and charts from my trading platform at the end of the day. I’d review this journal at the end of the week, and try to learn from my successful and not-so-successful trades. All in all, this was one of the best tools I had for understanding my trading.

But I hated keeping it.

I didn’t mind writing in it – why I was taking a trade, what was making me nervous about it, etc. That part was easy, and pseudo-creative work. What I hated was having to paste in my execution data, and pasting charts into it from my trading platform. It ended up being about an hour of busy-work at the end of every trading day. Once I even caught myself not taking a quick trade because I didn’t want to add even more work to my after-close routine. Obviously not good; my very best tool for improving my trading was becoming so onerous it was discouraging me from trading.

On the advice of many experienced traders, I also made a list of trading goals for the year. For 2011, two of my non-P&L-related trading goals were a) continue keeping my trading journal, because I was learning a lot from doing it, and b) come up with a way to objectively analyze my data to understand strengths and weaknesses that might not be obvious. For the second item, my hope was to find a product that would just work for me; I looked around for a while, but never found anything that “clicked.”

So with these two things in the back of my mind, I set to work to build something, just for myself, to address them. Find a way to write in my journal, but have the busy work be automated. Find a way to load all of my trading data, and show me views of it I haven’t seen before. Show me what’s working. And show me what’s not.

As I was building this, somehow I got distracted and decided to refocus a bit, and build a web application that could do this for everyone. And so was born Tradervue.

As Tradervue was taking shape, in the back of my mind I was thinking about the trading community I had learned a lot from, and the traders who actively share their ideas online on Twitter, StockTwits, and their blogs. What I have rarely seen is traders sharing actual trades. I don’t mean the sensitive data like how many shares were traded, or how much money was made – that’s not important. Rather, things like where did you enter this trade? How did you get in when it popped through the price level you were watching, but then dropped 0.50 before moving higher? When did you start to sell? Questions like that. Execution is everything – so why not show people how you executed?

As I thought more about this, I noted that Tradervue had all of the data necessary to share trades. The challenge was more a matter of figuring out specifically what information should be excluded and kept private, and then making it easy to share the more educational parts. Shouldn’t it just be a click or two to share a trade with the community, complete with charts and commentary? I thought so.
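
The mechanics are simple in principle: strip out the sensitive fields and share the educational ones. Here’s a rough sketch of the idea; the field names are hypothetical, not Tradervue’s actual data model:

    # Hypothetical sketch; field names are illustrative, not Tradervue's data model.
    PRIVATE_FIELDS = {"shares", "dollar_pnl", "account"}

    def shareable(trade):
        """Keep entries, exits, and notes; drop position size and P&L."""
        return {k: v for k, v in trade.items() if k not in PRIVATE_FIELDS}

    trade = {
        "symbol": "XYZ",
        "entries": [("09:32", 41.25)],
        "exits": [("10:05", 41.90)],
        "notes": "Bought the pullback after it popped through 41.",
        "shares": 2000,          # kept private
        "dollar_pnl": 1300.0,    # kept private
        "account": "ABC-123",    # kept private
    }

    print(shareable(trade))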

So I built the sharing into Tradervue. And combined with the trading journal capabilities (with generated charts) and the analysis it can do, allowing you to drill down as far as you want, I think it’s a pretty cool product.

Some beta users had average session lengths measured in hours, with no more than a few minutes of idle time during that period. It was quite amazing and exciting to watch; I’m even more excited to see where it goes from here.

So, happy birthday to Tradervue – today is its day!

Choosing a colocation vendor – power and cooling (part 3)

In part 1, we had a quick overview of things to think about when looking at colocation vendors and data centers. In part 2, we looked into your network and your bandwidth usage. Today, we’ll talk about power.

I pretty much ignored this when I first moved into a top-tier facility. I assumed if I was leasing one rack of space, they’d connect up enough power for me to fill that rack up with servers, and I wouldn’t need to worry about it.

In reality, that was far from the truth. Power is the most important thing to think about in a data center.

Let me say that one more time, just in case you missed it. Power is the most important thing to think about in a data center. And you should spend some quality time thinking about it.

Power density

Here’s a scenario, which, uh, may or may not be based on a true story. You’re growing fast. You need servers, pronto. And you want to make sure you have enough room to scale to more servers. So you call IBM, and the sales guy comes in with a small army and they start talking up their blade servers. And you just about fall over…not only do they look cool, but you can fit 14 blades in one 7U chassis! Six of those will fit in a rack, and you do some mental math, thinking about how many processor cores you can fit in those racks you just leased at your hosting facility. Growth problem solved. You’re almost ready to pull the trigger.

You just need to make sure you have enough power outlets in your rack to plug these babies in. You call your host’s engineer on the phone, bragging about how many processor cores you’re going to have, and has he heard about the new Binford SS-9000 blades with the 10-core processors? And you hear him tapping away on his keyboard in the background, and you’re just not getting the excited reaction from him you were hoping for.

Then he points out that those blade enclosures you are talking about have four 2300 W power supplies each. Six enclosures. That’s 55 kW. And while you’re unlikely to actually use 55 kW at any given moment, even, say, 25 kW would be a lot.
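
The math he’s doing in his head is simple enough. A quick back-of-the-envelope check, using the nameplate numbers from the story (actual draw will be lower):

    # Back-of-the-envelope math for the scenario above (nameplate numbers only;
    # real-world draw is typically well below this).
    supplies_per_enclosure = 4
    watts_per_supply = 2300        # each supply is rated at 2300 W
    enclosures_per_rack = 6

    nameplate_watts = supplies_per_enclosure * watts_per_supply * enclosures_per_rack
    print(f"Nameplate load: {nameplate_watts / 1000:.1f} kW")   # 55.2 kW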

No problem, you think. I’ll just keep adding power circuits as I keep adding blade enclosures, as my business grows. What’s a few bucks for power when I’m getting 140 cores in 7U? This is gonna be great.

But over the course of the next half hour, as he explains the data center’s version of the birds and the bees to you, you start to see the light. See, the problem isn’t whether or not they can drag enough power cables into your racks. They can.

The problem is cooling.

Every data center has some amount of power they can dissipate per square foot. It’s essentially the capability of the cooling systems they have. Some facilities are more efficient than others, but everyone has a number. And you can’t go over it. So you can have that rack that’s got 6 enclosures full of blades if you want it – but you might have to have, say, 100 sq. ft. of empty space around it to ensure it can be cooled. And yes, you’re going to pay for that space!
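
To put rough numbers on it, here’s how that per-square-foot budget turns into floor space; the figures below are made up, and your facility will quote its own:

    # Illustrative only: turning a facility's cooling budget into floor space.
    # The W/sq ft figure is hypothetical; ask your facility for theirs.
    rack_draw_watts = 25_000          # say the rack actually draws 25 kW
    cooling_w_per_sqft = 150          # hypothetical cooling capacity per sq ft

    required_sqft = rack_draw_watts / cooling_w_per_sqft
    print(f"Floor space needed to dissipate that load: about {required_sqft:.0f} sq ft")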

In real life, you will often end up leaving empty space in some of your racks for just this reason.

So now you know. Power density is one of the most important things you need to think about as you’re planning your data center future. Talk to your hosting company’s engineers about it; you’re not the first one to have questions about it, and it’s going to affect you. If not now, later.

Multiple power circuits

On its way to your rack, your power has to come through a bunch of distribution equipment. Someday there could be a failure in some piece of equipment, out of your control. So, if you are allowed to (and you should be), you should have power from multiple independent circuits coming into your equipment.

Mission-critical servers that will cause you pain if they go down, like database servers and such, should probably have multiple power supplies, and you should make sure each server is plugged into at least two independent circuits.

Other more failure-tolerant equipment (or as I like to call them, “disposable”) could be plugged into a single circuit each. So if you have 10 web servers, maybe plug 5 of them into each circuit.

But here’s the important part: do some modeling of exactly what will happen if you lose a power circuit. Depending on what’s connected, and how, your other circuit will likely need to absorb some of the load from the circuit that went down, which means it will be drawing more current than before. Make sure to model these loads (call your equipment manufacturers if you need more data from them), and understand what will happen in different power scenarios.

And while I’m thinking of hard-learned tips, here’s another one to think about while you’re doing your power modeling – don’t load your power circuits up anywhere near their max capacity. Something like 50% would be more reasonable.
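
Here’s a minimal sketch of the kind of modeling I mean, with made-up numbers for two circuits; the point is simply to check what each circuit carries when the other one trips:

    # Minimal sketch of power-failure modeling; all numbers are hypothetical.
    # Dual-supply servers split their draw across both circuits; if one circuit
    # trips, the surviving circuit picks up their entire draw.
    CIRCUIT_CAPACITY_W = 5760          # e.g. a 24 A circuit at 240 V
    TARGET_UTILIZATION = 0.5           # keep normal load around 50% of capacity

    dual_supply_draw_w = 1800          # total draw of the dual-supply (critical) servers
    single_supply_draw_w = {"circuit_a": 900, "circuit_b": 900}   # "disposable" boxes

    def load_on(circuit, other_circuit_up=True):
        # normally each circuit carries half of the dual-supply draw...
        load = single_supply_draw_w[circuit] + dual_supply_draw_w / 2
        if not other_circuit_up:
            # ...but after a failure it carries all of it
            load = single_supply_draw_w[circuit] + dual_supply_draw_w
        return load

    for circuit in single_supply_draw_w:
        normal, worst = load_on(circuit), load_on(circuit, other_circuit_up=False)
        print(f"{circuit}: normal {normal:.0f} W ({normal / CIRCUIT_CAPACITY_W:.0%}), "
              f"after failover {worst:.0f} W ({worst / CIRCUIT_CAPACITY_W:.0%})")
        assert normal <= TARGET_UTILIZATION * CIRCUIT_CAPACITY_W, f"{circuit} too heavily loaded"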

At one time we were living on the edge in terms of power in one particular rack, because we didn’t do this modeling. For whatever reason we ticked over the max on one circuit, and its circuit breaker tripped, taking it down. That shifted extra load onto the other circuit, because the power supplies connected to it now had to work harder. (Can you guess what’s coming?) And then, you guessed it, the additional load on the second circuit caused its circuit breaker to trip too. Two circuits, two breakers, no power.

Do your modeling. It’s not optional.

Choosing a colocation vendor – network and bandwidth (part 2)

In part 1, I gave a quick overview of some things to think about when choosing the data center vendor you want to colocate with. Today, we’ll talk about one particular topic, the network and bandwidth.

At a high level, there are three parts of your network: the external connection to the internet, your internal network inside your own firewall or router, and the connection between those two networks.

Other than research, there isn’t a lot you can do about the external connection to the internet. But if you go with a top-tier hosting company, and you should, then they know what they’re doing, they’re really good at it, and they can keep that connection up. You should still learn about it: what kinds of connections they have, the available bandwidth, redundancy, and all that; but once you’re comfortable, you can worry less. Just make sure your growth projections don’t end up with you using an appreciable percentage of their total bandwidth. Also ask about their capabilities around denial-of-service attacks and the like; they should be able to tell you if and how they can help if you run into network-related issues like that.

If you think you will eventually need to be hosted in multiple facilities, and your vendor has suitable ones, then ask about interconnections between those facilities. In many cases you will find they have private connections between their data centers, so ask about the available bandwidth and how that is priced.

On the opposite end of things is your own internal network that you design. I won’t touch on that, as only you know your requirements, and your network folks can do a great job there.

It’s the middle part that’s worth thinking about. Your firewall (or router) is connected to some piece of equipment that’s managed by your host. Sooner or later, that equipment needs to have a firmware update, or a new power supply, or something else. It will go down, if only for maintenance; everything does eventually. So think about the effect of that. If downtime is not acceptable, then make sure you can get (at least) two ethernet connections, connected to different equipment on their end. And on your end, you’ll need your own equipment which is capable of managing and failing over between those connections as necessary (two firewalls, for example, with a failover mechanism).
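
The failover itself will normally be handled by your own equipment (VRRP between the firewalls, for example), but as a rough illustration of the scenario you’re designing for, here’s a tiny health-check sketch; the gateway addresses are placeholders:

    # Illustrative health-check loop for two upstream connections. Real setups
    # would use VRRP/keepalived or similar on the firewalls themselves; this just
    # shows the decision you're planning for. Gateway IPs are hypothetical.
    import subprocess
    import time

    UPLINKS = {"primary": "198.51.100.1", "secondary": "203.0.113.1"}

    def is_reachable(ip):
        # one ping, 2-second timeout (Linux ping flags); True if the gateway answers
        result = subprocess.run(["ping", "-c", "1", "-W", "2", ip], capture_output=True)
        return result.returncode == 0

    while True:
        up = {name: is_reachable(ip) for name, ip in UPLINKS.items()}
        if not up["primary"] and up["secondary"]:
            print("Primary uplink down; traffic should be failing over to secondary")
        elif not any(up.values()):
            print("Both uplinks down; time to page someone")
        time.sleep(30)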

And on the topic of bandwidth, you’re a grown-up now in terms of hosting. You don’t buy 500 GB of “bandwidth” per month any more; you pay for usage in megabits per second (Mbps). So if your system sustains 30 Mbps most of the time, that’s what you will contract for. Your host will measure your sustained bandwidth, probably at the 90th or 95th percentile, and charge you overages if you go over your contracted amount. You may also be able to specify burst capability; for example, you might contract for 30 Mbps sustained and 100 Mbps burst, and they will cap your maximum bandwidth at the burst rate.
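
If you want to sanity-check the bill, the percentile math is easy to reproduce. A small sketch, assuming 5-minute samples and made-up numbers:

    # Sketch of 95th-percentile billing; the sample data is made up. Hosts
    # typically record usage every 5 minutes, drop the top 5% of samples,
    # and bill on the highest remaining value, so short bursts don't hurt.
    samples_mbps = [28.0] * 8400 + [95.0] * 240   # ~30 days of 5-minute samples,
                                                  # including 20 hours of bursts

    ranked = sorted(samples_mbps)
    billable = ranked[int(len(ranked) * 0.95) - 1]   # the 95th-percentile sample

    committed_mbps = 30
    print(f"95th percentile: {billable:.1f} Mbps")   # 28.0 -- the bursts got dropped
    print("Overage charges apply" if billable > committed_mbps else "Within committed rate")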

So that’s all pretty straightforward, and probably not a lot of surprises there for the startup CTO. Tomorrow, we’ll talk about power and cooling in part 3 – and that’s the part you definitely don’t want to miss.

Choosing a colocation vendor – overview (part 1)

There’s been a lot of talk the last few days about Facebook’s Open Compute project, where they have published info about their servers and data centers. It’s interesting reading. But, arguably, not specifically relevant for many folks.

Say you’re a startup. You’ve built the next great thing, you’ve got a few beta customers, and it’s time to get ready for real use. You’ve dabbled with shared hosting, and have vowed to never do that again. You’ve thought about virtual private servers, dedicated/managed hosting, and cloud services like EC2 or Windows Azure. But in the end, you’ve decided you’d rather own and operate your own servers. So you need a home for them.

Unfortunately, there’s not a ton of guidance out there for you at this point. There is a lot of superficial advice that Google will point you to, but not a lot that’s very useful for the startup CTO. What should you look for in a data center?

This article will talk about some high-level things to think about. Parts 2 and 3 will dig in further into what’s really important, and how to plan for the future.

You get what you pay for

Way back in 2002 or so, before I started NewsGator, I had a couple of servers at a mom-and-pop colo shop. I think it was something like $50/mo for a couple of servers – just insanely cheap. It was basically a small office suite, with maybe 5 racks of equipment. It seemed a little warm in there, and the only security was the lock on the suite door (meaning I could get to other folks’ servers). I wasn’t there long, but a friend was – he told me a lot of stories about his servers overheating, and even one time when his servers were down and unreachable for a whole weekend without any notice. He later found out that the company had put the entire rack he was in (shared with others) on a truck and hauled it across town to a new facility. Yikes.

What do I need?

You really want to find a company you trust; someone you can call a partner. You’re putting a lot of faith in them, so choose wisely. If you can afford it, go with a top-tier hosting facility in your area. If you can’t afford it, think harder about whether you really want to colocate. A few high-level things to think about:

  • physical security – who can get into the room your servers are in? How? Once they are in the room, could they get to your servers?
  • network – make sure you’re comfortable with the internal and external connections to the net, and how traffic is managed in the event of something going down.
  • bandwidth – think hard about what you need, both in terms of sustained and burst bandwidth.
  • physical space – is there room for you to expand as you grow?
  • cooling – your equipment doesn’t work if it gets too hot.
  • power – can they bring in enough power for your equipment? What happens if there is a grid power outage? How often do they test their generators and other power equipment? Have they ever had an actual power outage?
  • maintenance – sooner or later they’re going to have to do maintenance on, say, the router you’re connected to.
  • SLA – they can’t guarantee your site will be up, but they can guarantee that you will have power and network access. What does the service level agreement say?
  • company – is the company in good financial health? You don’t want to have to make an unscheduled move.

Now, when you look at a list like this for the first time, you’re probably thinking things like network and bandwidth are the most important to worry about. I did. I spent a lot of time worrying about that. But I found that it ended up being the least of my worries.

This has all been pretty high-level so far…but in part 2 tomorrow, we will look at network and bandwidth; then in part 3, we’ll talk about the big daddy of them all, power and cooling.

Unexpected Benefits

A couple of weeks ago, I read this post from entrepreneur Daniel Jalkut (inspired by these posts) about entrepreneurship, ambition, and whether to keep your startup small and just you, or grow it into something much bigger but with different responsibilities.

When I started NewsGator, it was just me, plus some outside help from whoever I could talk into having a burrito with me (you know who you are!). For the most part it stayed that way for a year or so; I enjoyed it immensely, and it had become what the cool kids are calling a lifestyle business. But I personally had bigger plans, and I took on outside investment in order to build it into something larger, with a scope so much wider that I just couldn’t do it all myself.

Now, the company is about 100 people, and I’m tremendously proud of what it has accomplished.

But somewhere along the way, something touched me in a profound way. One holiday season we had a week where everyone had an envelope with their name on it stuck to the wall in one part of the building. The idea was to write anonymous notes to other folks, presumably saying nice things about them, or thanking them for something they’ve done. At the end of the week, everyone took their own envelope and read the things people had said. In my envelope was a note I will never forget. Paraphrasing:

“Thanks for starting this company. Because of NewsGator, my family has been able to afford a home.”

I’m not one to get all sentimental, but reading this note choked me up.

Up to that time, I had spent a lot of time thinking about how we were affecting our customers’ lives. But I hadn’t really thought a lot about how we were profoundly affecting our employees’ lives as well.

An unexpected benefit of starting and growing a company.