#!/usr/bin/env python3
# Monte Carlo estimate of 7 x 5: throw random darts at the unit square
# and count the percentage that land in the rectangle x < 0.7, y < 0.5.
from random import random

trials = 200
hits = 0
for i in range(trials):
    x = random()
    y = random()
    if x < 0.7 and y < 0.5:
        hits += 1
print(100 * hits / trials)
I recently ran a quick simulation to estimate the answer to 7 x 5. In case anyone is wondering: it’s 35.000.
I think the point here is “Why simulate when you can get an exact answer?” In which case, the consideration is whether it is easier to ‘see’ that the simulation program is correct or that the reasoning for the exact answer is correct.
A similar situation that comes to mind is “exact” symbolic integration vs. “approximate” numerical integration; symbolic integration is not always possible (in terms of “simple” operations) whereas numeric integration is straightforward to perform, no matter how complex the original formula, but inexact.
∫(0 to 7) 5 dx ≈ 35.000
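A minimal sketch of the numerical route, using a plain midpoint Riemann sum (the integrand and bounds are the ones from the joke):

```python
# Midpoint-rule approximation of the integral of f over [a, b]
# using n equal subintervals.
def midpoint_integrate(f, a, b, n=1000):
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

# The midpoint rule is exact for constants, so this recovers 35
# up to floating-point rounding.
print(midpoint_integrate(lambda x: 5.0, 0.0, 7.0))
```

Swapping in a messier integrand changes nothing about the call, which is the appeal of the numerical route.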
Yes—while reasoning through the problem might give you a deeper understanding, if you just want to know the answer it can sometimes be easier to be sure that your program is correct than that your mathematical reasoning is correct.
I’m curious: how do you estimate the product of seven and five?
I ran a bunch of trials where I randomly chose floating point values A and B from the interval [1, 1000]. Then I either added A to itself B times or added B to itself A times. Then I took an average of all the sums, weighting each by the “relevance factor” (5/A)(7/B).
I know this was trying to be funny, but that algorithm didn’t really use simulation to estimate 7 x 5. It just calculates 7 x 5 a bunch of times and takes the average, with the added step of multiplying and dividing by AB.
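To make the objection concrete: each weighted term collapses to (A·B)·(5/A)·(7/B) = 35 exactly, so nothing random survives the weighting. A quick sketch (using A·B in place of "A added to itself B times", since B isn't an integer here):

```python
from random import uniform

def relevance_weighted_estimate(trials=1000):
    total = 0.0
    for _ in range(trials):
        a = uniform(1, 1000)
        b = uniform(1, 1000)
        s = a * b                   # stand-in for "A added to itself B times"
        weight = (5 / a) * (7 / b)  # the "relevance factor"
        total += weight * s         # algebraically this is always 35
    return total / trials

print(relevance_weighted_estimate())  # 35.0, up to floating-point noise
```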
But then, I’m maybe not creative enough to come up with an algorithm that would actually output an approximation of 7 x 5 using some probabilistic method that doesn’t include calculating 7 x 5.
Throw darts at a unit square, take the fraction that hit a point (x < .7, y < .5) and multiply by 100. (Also works to calculate pi.)
I get 36.0.
It’s sampling variation. Set trials = 10**9 and see what you get.
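The size of that variation is easy to quantify: the hit count is binomial, so the standard error of the percentage estimate is 100·sqrt(p(1−p)/trials). A sketch, plugging in p = 0.35:

```python
from math import sqrt

def stderr_percent(p, n):
    # Standard error, in percentage points, of 100 * (binomial proportion).
    return 100 * sqrt(p * (1 - p) / n)

# With 200 trials, an estimate of 36.0 sits well inside one standard error of 35.
print(round(stderr_percent(0.35, 200), 2))    # ≈ 3.37
# With a billion trials the noise drops to thousandths of a unit.
print(round(stderr_percent(0.35, 10**9), 4))  # ≈ 0.0015
```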
If I knew how to take fractions, I would have just done 7/(1/5).
Yeah, I guess I should have made the effort to understand the principles of the subject I was reading about rather than do a random trivial programming exercise with no general applicability whose dominance by simple mathematics I could have predicted a priori.
I ran a bunch of trials where I randomly chose floating point values A and B from the interval [0, 1000]. Then I either added A to itself B times or added B to itself A times. Then I took an average of all the sums, weighting each by the “relevance constant” (5/A)(7/B).