I'm building a supercomputer, anyone wanna help?

edited February 2015 in Everything else
hey guys,
I know it's a bit off topic for the forum, but I am trying to build a supercomputer and I have a GoFundMe running to fund the project.

Once built, I will be able to make it available to the community for gene sequencing or any other mass-calculation needs we may come across.
Any assistance in funding or spreading the word from you all would be greatly appreciated!


  • Can't follow the link from where I'm at, but I'll take a look at it fer sho!
  • The user and all related content has been deleted.
  • Yeah, they are pretty amazing little boards! Just one of them is fairly powerful in its own right. It's the combination of the processor, the FPGA, and the Epiphany coprocessor that makes it so unique and amazing. It really is a perfect platform for learning a ton of different things, all packed into a tiny little package.

    Your idea for a distributed computing network is pretty cool as well; it would not be too difficult to use BOINC on a host server to run the whole show. It could also be possible to use the supercomputer as a compute server for implanted devices, so that they can offload heavy computations to the cloud and function essentially as remote terminals while still giving you amazing performance and abilities. This is similar to the OnLive service.
  • Once this thing is built I may have a few fun ideas for you to try
  • I wonder if we could more realistically set something like BOINC up for biohacking, assuming we could find a use for it.

    EDIT: More realistic, since $10,000 is quite a bit of money for people to give up with virtually no return on their dollar.
  • The user and all related content has been deleted.
  • I plan to build it whether the crowdfunding succeeds or not; I just ordered the master node and the first 3 slave nodes this morning. I am also reaching out to a lot of the other communities that I contribute to, hoping that it will work. If the crowdfunding works it cuts the build time down from years to weeks, but if not I will simply keep plugging away at it little by little.

    Setting up BOINC is really not hard at all. The hardest part is getting a static IP address, simply because it costs money; the rest can be done in an afternoon.

  • Honestly, I like the idea of having a super computer. But I'm not sure I understand what you're actually going to be doing with it. It's not like you can re-write proprietary software, and multi-threading already exists.

    If I'm totally misunderstanding your plan, please correct me.
  • This project is predominantly a research exercise at the beginning. Each node contains a dual-core CPU, an FPGA, and what is essentially a GPU. This means that to fully understand how to maximize its output, you must have a strong knowledge of all three and be able to assign tasks appropriately. Then it takes on an entire extra layer when you add additional nodes.

    To start, I want to explore building a new compiler that will take regular serial code, automatically find points of parallelism, and separate them out to multiple nodes. This could eliminate the need for people to change the way they code in order to take advantage of parallel computing.
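    To give a feel for the kind of transformation such a compiler would perform, here is a hand-written sketch (not the poster's compiler, just an illustration): an embarrassingly parallel loop whose iterations are independent can be rewritten to fan out across workers without changing the result. The `work` function is a hypothetical stand-in for an expensive computation.

```python
# Hand-written illustration of the transformation an auto-parallelizing
# compiler would aim to do automatically: spot a loop whose iterations
# are independent and farm them out to multiple workers.
from multiprocessing import Pool

def work(x):
    # hypothetical stand-in for an expensive, independent computation
    return x * x

def serial(data):
    # the original serial form: one iteration after another
    return [work(x) for x in data]

def parallel(data, workers=4):
    # the transformed form: because each element is independent,
    # the loop can be split across processes with no change in result
    with Pool(workers) as pool:
        return pool.map(work, data)

if __name__ == "__main__":
    data = list(range(10))
    assert serial(data) == parallel(data)
```

    The hard part of the compiler, of course, is proving the iterations really are independent; this sketch just assumes it.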

    Then I plan to work on writing new applications that actually take advantage of parallel computing. Right now there are very few that do, and the majority of them are scientific number crunchers. Essentially, every program you have ever used has to be rewritten in order for the world to make the switch to parallel. I do not have to rewrite proprietary software at all, but someone has to write programs that do the same things as all of the proprietary software. It's essentially a lot of reinventing the wheel, but this time we make it more round.

    Right now I would guess that there are about 2,000 people in the world that have ever written a program for a supercomputer. So say someone wants to sequence the DNA of every person here, and we have all made a BOINC network so we can do it really fast. Now there is an issue: there are only 2,000 people with actual experience writing a program that will do the task. So either someone has to learn it from the ground up, or pay big money to one of a very small group of people who already know how.

    Please note that multi-threading is not the same as parallel processing: multi-threading is concurrent, but not necessarily parallel.

    As I stated above, this is a predominantly educational project and will likely be for the first couple of years, but during that time its power will be contributed to distributed computing efforts whenever it is not being actively used.
  • What is the difference between your project and, say, Erlang?
  • I am not sure I understand what you are asking; Erlang is a programming language, and it is supported by this system.
  • 'Right now I would guess that there are about 2000 people in the world that have ever written a program for a super computer'

    I find that really hard to believe, considering we have 3 supercomputers on campus and I personally know 5 people that use them daily... Plus all the crazy Bitcoin ops from the past few years. At any rate, I plan on taking one of the courses on using supercomputers in the near future, so if I get any resources I'll put them on the wiki.

    If you need any help programming, let me know. Low-level languages and I don't get along, but I've done it in the past. Parallel programming is super interesting, but like you said, you basically have to reinvent your wheels to get going. I would suspect that there are some flavors of Linux out there that would make it pretty easy to get started.
  • I could be way off, it was a guess. I'm sure there has been a pretty big surge since Bitcoin came about, but there are still only about 5 programs that have actually been written for mining, plenty of forks though.

    I will let you know if I need any help, for sure. I too am not so great with low-level languages, but that's one reason why I am so excited to get into this: it forces me to essentially learn everything from the hardware up.
  • I have very little experience with low level languages, but I do know someone who learned to program entirely in machine language. He might be able to help us some.
  • I have been doing some thinking about what would be a good place to start learning, and about defining the scope of what I plan to do with this whole parallel computing thing, and I have come up with a goal to work towards: I want to attempt to prove/disprove the infinite monkey theorem.

    Here is a link for more info on the theorem, just in case anyone reading is not familiar with it.

    The plan is to use a software-defined radio to generate random numbers from atmospheric noise. Each node will query a random number between 0 and 46, and it will do this approximately 130,000 times (the actual count will be the number of characters in Shakespeare's Hamlet, which I still need to count accurately). Each integer will be tied to a typewriter key, and the results will be dumped into a text file, which will be compared against a copy of Hamlet by the head node.

    Say that there is a probability of one in 3.4 × 10^183,946 of getting the text right on the first trial. I would like to test whether it would ever actually happen, and if it does happen (which I assume it will), how many attempts were made to get to that point.

    This is just a little sub-project to give me something to work towards in my studies, but it is embarrassingly parallel in nature, so it should be a good platform for learning parallel computing.

    If anyone notices any issues with my methodology, care to join me, or just cares to comment then I am all ears.
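    A minimal single-node version of the trial described above might look like the sketch below. Everything here is assumed for illustration: `random.randrange` stands in for the SDR-derived entropy source, the 47-character `KEYS` string is a hypothetical mapping of the integers 0-46 onto typewriter keys, and a short phrase stands in for the full ~130,000-character Hamlet.

```python
# Minimal single-node sketch of the monkey-typing trial.
import random

# hypothetical 47-key "typewriter": a-z, space, digits, basic punctuation
KEYS = "abcdefghijklmnopqrstuvwxyz 0123456789.,;:'!?-()"
assert len(KEYS) == 47  # integers 0..46 map one-to-one onto keys

def type_randomly(length, rng=random):
    # one trial: hammer `length` random keys
    # (rng.randrange stands in for the SDR atmospheric-noise source)
    return "".join(KEYS[rng.randrange(47)] for _ in range(length))

def trials_until_match(target, max_trials=1_000_000, seed=0):
    # count attempts until a trial reproduces the target exactly,
    # or give up after max_trials
    rng = random.Random(seed)
    for attempt in range(1, max_trials + 1):
        if type_randomly(len(target), rng) == target:
            return attempt
    return None

if __name__ == "__main__":
    # even a 3-character target takes on the order of 47**3 ≈ 104,000 tries,
    # which shows why the full text is hopeless without astronomical time
    print(trials_until_match("to "))
```

    Scaling this out would mean each node running `trials_until_match` independently on its own entropy stream, with the head node collecting attempt counts, which is what makes the problem embarrassingly parallel.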

  • interesting.  Sounds like a decent way to learn at any rate.
  • If you can't raise the money for the computer, you should build a networked "super"computer and ask biohackers to donate computing power to you.
    [email protected] uses BOINC for their project. It might be something worth looking into.

    edit: @IDPS already pointed this out.
    Also, I don't fully understand the problem you're trying to solve. I get the Infinite Monkey Theorem (I would also be interested in testing this theory) and the problems around it (truly random characters), but I'm not sure I'm fully understanding the computing problem you're trying to solve.

    edit 2: You said "Here's the issue: Nearly all of the programs we all use today are written to only use a single processor. This means that no matter how many processors you add, the program only uses ONE...." Now, when you say processor, do you mean a physical CPU or cores inside a CPU? If you meant the actual CPU, then how many people do you know that have more than 1 CPU in their computer? (Most people don't, which is why developers don't build their software for it.) If you're talking about cores, then it's already possible.
  • I was looking at BOINC, but I was hesitant to set one up unless I had something truly impactful to use it for. It doesn't seem too terribly difficult to do if anyone has something that needs that kind of computing power. Also, using the Amazon servers works well for a lot of things.

    I attempted to write my crowdfunding page at a very low level to allow non-technical people to understand, but it has caused a lot of questions with the techies. I am speaking about multiple physical CPUs. Because we have already pumped away at about all the gains that can be had by increasing the clock rate of the CPU, the yearly gain in computing power is slowing below the expected curve. In order to re-stimulate gains in computing power, CPU design companies (AMD/Intel/etc.) are going to start implementing multiple CPUs on a single motherboard (in the form of SoCs, of course). This allows for greater computing power without cranking up the clock rate, generating additional heat, and causing all of those headaches.

    If you look at the hardware that I am using, it has two A9 processor cores as well as a co-processor of 16 RISC cores and an FPGA per node. This allows for a lot of interesting scenarios when you cluster up a bunch of nodes.

    Does this clarify your question?

    Also, while the infinite monkey theorem is something fun to play around with, there is truly no need to actually test it, because we already know mathematically that it will occur, since there is a finite number of combinations. It is just a fun platform for my research and a way to give myself a goal.

    As far as the random numbers go, I have pretty much got that nailed down already. I am currently generating 5 Mb/s of entropy on a single node and could expand that out to 80 Mb/s per node using a USB hub quite easily. With the current 4-node cluster I have purchased (gotta love overtime pay) and a little extra hardware, I can achieve 320 Mb/s of entropy. I'm not sure how familiar you are with entropy, but 5 Mb/s is a pretty ridiculous amount by itself, and 320 Mb/s is just a comically stupid amount of entropy. For perspective, a million-dollar quantum vacuum noise entropy generator will get you 2 Gb/s, so I can generate a crazy amount of entropy for the money spent.
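    A quick back-of-envelope check of the figures quoted above (the 16-sources-per-node count is an assumption inferred from the 5 and 80 Mb/s numbers, not stated in the post):

```python
# Sanity check of the quoted entropy rates (megabits per second).
per_source = 5            # Mb/s from one entropy source
sources_per_node = 16     # sources per node via USB hub (assumed from 80/5)
nodes = 4                 # current cluster size

per_node = per_source * sources_per_node
cluster = per_node * nodes
print(per_node, cluster)  # prints: 80 320
```

    So the numbers are internally consistent: 80 Mb/s per node and 320 Mb/s across the 4-node cluster.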
  • The user and all related content has been deleted.
  • The point is finding patterns in something which, by definition, has no pattern. Just for funzies.
  • @Osteth do you really think we'll be seeing SoCs anytime soon in desktop computing? Current SoCs don't have the power that a desktop CPU has. Also, the decline in yearly gains in computing power isn't a technological problem, it's an economic problem. The majority of computer users only need a 4-core (single-threaded) CPU with 6-8 GB of RAM. There's no real incentive for AMD/Intel to build super powerful CPUs; both companies have, and neither is really seeing significant returns on those CPUs because it's not worth the cost. I could occasionally find use for such power, but it's not worth spending $1000 on a CPU IMO.
  • @otptheperson there is not much need for quite that much entropy, really, but it is useful for things like encryption, cryptocoin mining, and other stuff that needs random numbers. It is also useful to have a big pool of entropy to pull from if you are running a big server with a lot of VMs, because VMs generate no entropy of their own.

    @jamcar23 I can pretty much guarantee SoCs are coming to desktop computers. They are cheaper to produce, use less power, create less heat, and eliminate 90% of the motherboard, dropping the overall cost of the computer significantly.
    Also, AMD has bowed out of the CPU game already; they are transitioning to APUs, GPUs, and, I believe, SoCs.

    It is also likely that desktop computers will become very small terminals that essentially just handle I/O. The large bulk of computing will be handled on the server side, and instead of purchasing programs you will rent server time. This allows companies to essentially completely defeat software piracy, more easily control the user experience, and generate more profit from their software while lowering the cost of entry.

    You can already see this happening with things like Adobe Creative Cloud, and Nvidia has something for graphics rendering, but I can't remember what it's called.

    This is all around good for the big companies, but it will make it nearly impossible for new start-ups to get into the game, strips you of ownership of your copy of the software, and crams you into this little box of doing exactly what they want you to do instead of whatever you want, because you won't even have the requisite computing power to run a basic program on your home workstation. You know, basically all the same things we're seeing with net neutrality...