Once you have access to a supercomputer, you will want to run your model code on it as soon as possible! Below, I’ve listed a few steps and issues I’ve run into when setting up my NEURON code on a new supercomputer. The first two (installing NEURON and writing a job submission script) can be avoided by using the Neuroscience Gateway (NSG) portal. The NSG is a great option for getting started with large-scale, parallel NEURON modeling – you can get a free account with some computing time right away, and it is well supported.
If you decide not to use the NSG, or if you like it but want to scale your project up even further, you will face some additional considerations when using NEURON on another supercomputer. Here are some solutions that may be useful: Continue reading
NEURON code can easily be run in parallel (on many processors at once), saving a large amount of time. This applies both to large-scale network simulations and to very small simulations that you need to run hundreds of times to explore a parameter space. To run NEURON in parallel, you need access to a computer with multiple processors or cores. Here are some ideas for how to find one. Continue reading
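As a generic illustration of the second case (not NEURON-specific, and not from the post itself), a sweep over a parameter space of independent small simulations can be farmed out across local cores with Python’s standard library; `run_sim` here is a hypothetical stand-in for one simulation run:

```python
from itertools import product
from multiprocessing import Pool

def run_sim(params):
    """Hypothetical stand-in for one small simulation run.

    In a real NEURON workflow this would build the model, set the
    parameters for this point in the sweep, run it, and return a
    measured quantity. Here it just returns a placeholder value.
    """
    g_na, g_k = params
    return (params, g_na * g_k)  # placeholder "result"

if __name__ == "__main__":
    # Cartesian product of two parameter axes: 10 x 10 = 100 runs,
    # each completely independent of the others.
    space = list(product([0.1 * i for i in range(1, 11)],
                         [0.05 * i for i in range(1, 11)]))
    with Pool(processes=4) as pool:  # one worker per core
        results = pool.map(run_sim, space)
    print(len(results))  # 100 independent results
```

Because each point in the sweep is independent, this scales almost linearly with the number of cores; for true network simulations, where cells on different processors must exchange spikes, NEURON provides its own MPI-based machinery instead.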
As mentioned in a previous post, you can compile frequently used functions to save time when running your NEURON code. For my code, this made a huge difference when it came to connecting the model cells. The process used to be coded in hoc, and it took hours. Now it takes about a minute in NMODL for my largest model size. Continue reading
I use NEURON for my neural simulations and I recommend it. Not only is it well documented, but it is consistently well supported – Michael Hines and Ted Carnevale (NEURON developers) are always quick to respond to questions on the NEURON forum or by email. NEURON has been used in well over a thousand publications.
Almost every programmer wants their program to execute as quickly as possible, and computational neuroscientists are no exception. Most important is your own time: the faster you can get results, the faster your research progresses and the sooner you can publish. But computing time is also an issue – people often run their simulations on shared machines and can only get (or afford) a limited amount of computing time. Continue reading
I’m a big fan of subscribing to academic journals’ tables of contents in RSS form. It allows for quickly browsing all the recently published articles and flagging ones of interest for further reading.
I also depend on a reference manager to keep my articles of interest organized and accessible. Plus, it makes bibliography generation so much faster!
But it’s hard to find an RSS reader that plays well with an online reference manager. Read on for my solution for importing journal articles from feedly to Mendeley. Continue reading