I'm building a supercomputer. Anyone wanna help?
Hey guys,
I know it's a bit off topic for this forum, but I am trying to build a supercomputer, and I have a GoFundMe running to fund the project.
Once built, I will be able to make it available to the biohack.me community for gene sequencing or any other mass-calculation needs we may come across.
Any assistance in funding or spreading the word from you all would be greatly appreciated!
Comments
EDIT: More realistically, since $10,000 is quite a bit of money for people to give up with virtually no return on their dollar.
If I'm totally misunderstanding your plan, please correct me.
I find that really hard to believe, considering we have three supercomputers on campus and I personally know five people who use them daily. Plus all the crazy Bitcoin ops from the past few years. At any rate, I plan on taking one of the courses on using supercomputers in the near future, so if I get any resources I'll put them on the wiki.
If you need any help programming, let me know. Low-level languages and I don't get along, but I've done it in the past. Parallel programming is super interesting, but like you said, you basically have to reinvent the wheel to get going. I would suspect that there are some flavors of Linux out there that would make it pretty easy to get started.
I will let you know if I need any help, for sure. I too am not so great with the low-level languages, but that's one reason why I am so excited to get into this: it forces me to essentially learn everything from the hardware up.
Here is a link for more info on the theorem, just in case anyone reading is not familiar with it: http://en.wikipedia.org/wiki/Infinite_monkey_theorem
My plan is to use a software-defined radio to generate random numbers from atmospheric noise. Each node will query a random number between 0 and 46, and it will do this approximately 130,000 times (the exact number will be the actual number of characters in Shakespeare's Hamlet, which I still need to count accurately). Each integer will be tied to a function of a typewriter key. The results will be dumped into a text file, which will be compared to a copy of Hamlet by the head node.
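For anyone curious, here is roughly what a single trial could look like in Python. This is only a sketch of my reading of the plan above: the key mapping, the character count, and the file name are all placeholders, and the real entropy would come from the SDR rather than Python's random module.

```python
import random  # stand-in for the SDR entropy source in this sketch

# Hypothetical 47-key "typewriter" (letters, space, punctuation, digits);
# the real mapping would match however Hamlet ends up being counted.
KEYS = ("abcdefghijklmnopqrstuvwxyz"
        " .,;:'!?-\"()"
        "0123456789")[:47]

HAMLET_LENGTH = 130_000  # placeholder until the text is counted exactly

def run_trial(target: str) -> bool:
    """'Type' one full-length attempt and compare it to the target text."""
    attempt = "".join(KEYS[random.randint(0, 46)] for _ in range(len(target)))
    return attempt == target

# The head node would load the reference copy, e.g.:
# target = open("hamlet.txt").read()[:HAMLET_LENGTH]
```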
Calculations say that there is a probability of one in 3.4 × 10^183,946 of getting the text right on the first trial. But I would like to test whether it would ever actually happen, and if it does happen (which I assume it will), how many attempts were made to get to that point.
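For what it's worth, that figure from the Wikipedia article appears to assume a 26-letter keyboard; with the 47-key scheme above the odds get even longer. A quick back-of-the-envelope check (my own arithmetic, not from the original post):

```python
import math

def one_in(alphabet: int, length: int = 130_000) -> str:
    """Probability of typing the text on the first try, as '1 in m x 10^e'."""
    log10_outcomes = length * math.log10(alphabet)  # log10 of alphabet**length
    mantissa = 10 ** (log10_outcomes % 1)
    return f"1 in {mantissa:.1f} x 10^{int(log10_outcomes)}"

print(one_in(26))  # 1 in 3.4 x 10^183946  (matches the Wikipedia figure)
print(one_in(47))  # 1 in 5.3 x 10^217372  (with 47 keys instead of 26)
```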
This is just a little sub-project to give me something to work towards in my studies, but it is inherently complex and embarrassingly parallel in nature, so it should be a good platform for learning parallel computing.
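The embarrassingly parallel part is that every trial is independent, so each node (or core) can just loop on its own with zero coordination. A minimal sketch of that pattern using Python's multiprocessing, with a toy 27-key alphabet and tiny targets so it actually finishes:

```python
import random
from multiprocessing import Pool

KEYS = "abcdefghijklmnopqrstuvwxyz "  # toy 27-key alphabet for the demo

def attempts_until_match(target: str) -> int:
    """Keep 'typing' random strings until one equals the target,
    and report how many attempts it took."""
    attempts = 0
    while True:
        attempts += 1
        guess = "".join(random.choice(KEYS) for _ in target)
        if guess == target:
            return attempts

if __name__ == "__main__":
    # Each worker hunts for its own (very short) target independently;
    # that independence is what makes the problem embarrassingly parallel.
    with Pool() as pool:
        counts = pool.map(attempts_until_match, ["to", "be", "or", "not"])
    print(counts)  # e.g. [512, 977, 233, 15163]
```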
If anyone notices any issues with my methodology, cares to join me, or just cares to comment, I am all ears.
SETI@home uses BOINC for their project. It might be something worth looking into.
http://boinc.berkeley.edu/
http://boinc.berkeley.edu/trac/wiki/CreateProjectCookbook
edit: @IDPS already pointed this out.
Also, I don't fully understand the problem you're trying to solve. I get the Infinite Monkey Theorem (I would also be interested in testing this theory) and the problems around that (truly random characters), but I'm not sure I'm fully understanding the computing problem you're trying to solve.
edit 2: You said "Here's the issue: Nearly all of the programs we all use today are written to only use a single processor. This means that no matter how many processors you add, the program only uses ONE...." Now, when you say processor, do you mean a physical CPU or cores inside a CPU? If you did mean the actual CPU, then how many people do you know that have more than one CPU in their computer? (Most people don't, which is why developers don't build their software for it.) If you're talking about cores, then it's already possible.
I attempted to write my crowdfunding page at a very basic level to allow non-technical people to understand it, but that has caused a lot of questions from the techies. I am speaking about multiple physical CPUs. Because we have already squeezed out about all the gains that can be had by increasing the clock rate of the CPU, the yearly gain in computing power is slowing below the expected curve. In order to re-stimulate gains in computing power, chip design companies (AMD/Intel/etc.) are going to start implementing multiple CPUs on a single motherboard (in the form of SoCs, of course). This allows for greater computing power without cranking up the clock rate, generating additional heat, and causing all of those headaches.
If you look at the hardware I am using, each node has two A9 processors as well as a co-processor of 16 RISC cores and an FPGA. This allows for a lot of interesting scenarios when you cluster up a bunch of nodes.
Does this clear things up?
Also, while the infinite monkey theorem is something fun to play around with, there is truly no need to actually test it, because we already know mathematically that it will occur: there is a finite number of combinations. It is just a fun platform for my research and a way to give myself a goal.
As far as the random numbers go, I have pretty much got that nailed down already. I am currently generating 5 Mb/s of entropy on a single node and could expand that out to 80 Mb/s per node using a USB hub quite easily. With the current 4-node cluster I have purchased (gotta love overtime pay) and a little extra hardware, I can achieve 320 Mb/s of entropy. I'm not sure how familiar you are with entropy, but 5 Mb/s is a pretty ridiculous amount by itself, and 320 Mb/s is just a comically stupid amount of entropy. For perspective, a million-dollar quantum-vacuum-noise entropy generator will get you 2 Gb/s, so I can generate a crazy amount of entropy for the money spent.
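One small gotcha worth flagging when turning that entropy stream into keypresses: taking a raw byte mod 47 would slightly bias the low numbers, since 256 isn't a multiple of 47. Rejection sampling fixes that cheaply. A sketch, assuming the entropy shows up as raw bytes; the device path here is just a placeholder, not the actual setup:

```python
def next_key(stream) -> int:
    """Turn raw entropy bytes into a uniform integer in 0..46.
    235 is the largest multiple of 47 that fits in a byte (47 * 5),
    so bytes 235..255 are rejected to avoid modulo bias."""
    while True:
        b = stream.read(1)[0]
        if b < 235:
            return b % 47

# Usage sketch; "/dev/hwrng" is just a placeholder for wherever the
# SDR-fed entropy actually shows up on a node:
# with open("/dev/hwrng", "rb") as rng:
#     key = next_key(rng)
```

The cost is low: only 21 of 256 byte values get thrown away, so you lose about 8% of the stream in exchange for a truly uniform key distribution.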