When I’m writing code in an area which I don’t really understand, I often write exploratory code that uses way more resources than a polished solution would, but it’s not code I’m writing instead of polished code—I’m writing that code because it will help me learn enough to write polished, elegant code later.
Solving a problem with brute force is usually a faster path toward a clean solution than refusing to implement any solution until you’ve thought of the best one.
But you don’t start out trying to solve the problem in a hilariously inappropriate way. For example, if your boss said, “Hey, sort these 10 billion numbers,” you wouldn’t do simulated annealing with a cost function that penalizes unsorted entries, just making random swaps in the data and telling your boss to come back in 10 years, when it will only probably be finished with an only probably correct answer. That’s a categorical waste of resources, not a strategic upping of resources to get a first, but still reasonable, attempt that you can then whittle into something better.
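To make the joke concrete, here’s a minimal toy sketch of that “annealing sort” in Python (illustrative only; the cost function counting out-of-order adjacent pairs and the step/temperature parameters are my own assumptions, not anything from a real system):

```python
import math
import random

def unsortedness(xs):
    # Cost: number of adjacent pairs that are out of order (0 means sorted).
    return sum(1 for a, b in zip(xs, xs[1:]) if a > b)

def anneal_sort(xs, steps=200_000, temp=1.0, cooling=0.9999):
    xs = list(xs)
    cost = unsortedness(xs)
    for _ in range(steps):
        if cost == 0:
            break  # happened to reach a sorted state
        # Propose a random swap of two positions.
        i, j = random.sample(range(len(xs)), 2)
        xs[i], xs[j] = xs[j], xs[i]
        new_cost = unsortedness(xs)
        # Accept improvements always; accept regressions with
        # probability exp(-delta / temperature).
        if new_cost <= cost or random.random() < math.exp((cost - new_cost) / temp):
            cost = new_cost
        else:
            xs[i], xs[j] = xs[j], xs[i]  # undo the swap
        temp *= cooling  # cool down over time
    return xs

data = [random.randint(0, 100) for _ in range(30)]
result = anneal_sort(data)
```

Even on 30 numbers this burns hundreds of thousands of cost evaluations and still only *probably* finishes sorted, which is exactly the point: the resources aren’t buying you a rough first draft of a good solution, they’re propping up an approach that was never appropriate for the problem.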
As a machine learning researcher, my opinion is that Watson is more like simulated annealing. It’s like someone said, “Hey, how can we make this thing play Jeopardy! without thinking at all about how it will do the data processing? How large do we have to make it if its processing is as stupid and easy to implement as possible?”
See my other comment for more on this.