MECP2 regulates cortical plasticity underlying a learned behavior in adult female mice

http://www.biorxiv.org/content/early/2016/02/28/041707


Functional Genetic Screen to Identify Interneurons Governing Behaviorally Distinct Aspects of Drosophila Larval Motor Programs

http://www.biorxiv.org/content/early/2016/02/28/041061

Drosophila larval crawling is an attractive system to study patterned motor output at the level of animal behavior. Larval crawling consists of waves of muscle contractions generating forward or reverse locomotion. Larvae also perform other behaviors, including head casts, turning, and feeding. It is likely that some neurons are used in all of these behaviors (e.g. motor neurons), but the identity (or even existence) of neurons dedicated to specific aspects of behavior is unclear. To identify neurons that regulate specific aspects of larval locomotion, we performed a genetic screen for neurons that, when activated, could elicit distinct motor programs. We used 165 Janelia CRM-Gal4 lines chosen for sparse neuronal expression to express the warmth-inducible neuronal activator TrpA1 and screened for locomotor defects. The primary screen measured forward locomotion velocity, and we identified 63 lines with locomotion velocities significantly slower than controls following TrpA1 activation (28°C). A secondary screen on these lines revealed multiple discrete behavioral phenotypes, including slow forward locomotion, excessive reverse locomotion, excessive turning, excessive feeding, immobility, rigid paralysis, and delayed paralysis. While many of the Gal4 lines had motor, sensory, or muscle expression that may account for some or all of the phenotype, some lines showed specific expression in a sparse pattern of interneurons. Our results show that distinct motor programs utilize distinct subsets of interneurons, and they provide an entry point for characterizing the interneurons governing different elements of the larval motor program.
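
As a rough illustration of the primary-screen logic — emphatically not the authors' pipeline — the velocity comparison could look something like this in Python. The one-sided Welch t-test, the Bonferroni correction, and all names below are my assumptions:

```python
# Minimal sketch of a primary screen: flag Gal4 lines whose forward-crawling
# velocity at 28°C is significantly slower than control. The one-sided Welch
# t-test and Bonferroni correction are illustrative choices, not the authors'
# published analysis.
from scipy import stats

def slow_lines(velocities_by_line, control_velocities, alpha=0.05):
    """velocities_by_line: dict of line name -> per-larva velocities (mm/s);
    control_velocities: velocities for the control genotype."""
    hits = []
    n_tests = len(velocities_by_line)
    for line, v in velocities_by_line.items():
        t, p_two = stats.ttest_ind(v, control_velocities, equal_var=False)
        p_one = p_two / 2 if t < 0 else 1 - p_two / 2  # one-sided: slower only
        if p_one < alpha / n_tests:  # Bonferroni across all screened lines
            hits.append(line)
    return hits
```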

Learning In Spike Trains: Estimating Within-Session Changes In Firing Rate Using Weighted Interpolation

http://www.biorxiv.org/content/early/2016/02/26/041301

The electrophysiological study of learning is hampered by modern procedures for estimating firing rates: Such procedures usually require large datasets, and also require that included trials be functionally identical. Unless a method can track the real-time dynamics of how firing rates evolve, learning can only be examined in the past tense. We propose a quantitative procedure, called ARRIS, that can uncover trial-by-trial firing dynamics. ARRIS provides reliable estimates of firing rates based on small samples using the reversible-jump Markov chain Monte Carlo algorithm. Using weighted interpolation, ARRIS can also provide estimates that evolve over time. As a result, both real-time estimates of changing activity, and of task-dependent tuning, can be obtained during the initial stages of learning.
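
The full ARRIS estimator (reversible-jump MCMC over per-trial firing-rate models) is beyond a blog snippet, but the weighted-interpolation idea is easy to sketch: each trial's estimate borrows from neighboring trials, with weights that decay with trial distance. The Gaussian kernel, its bandwidth, and all names below are my illustrative assumptions, not the paper's implementation:

```python
# Sketch of weighted interpolation across trials: a trial's firing-rate
# estimate pools spike counts from nearby trials, weighted by a Gaussian
# kernel over trial distance. Kernel and bandwidth are illustrative; ARRIS
# itself derives its estimates via reversible-jump MCMC, not reproduced here.
import numpy as np

def interpolated_rates(spike_counts, durations, bandwidth=3.0):
    """spike_counts, durations: 1-D arrays with one entry per trial.
    Returns a within-session firing-rate estimate (Hz) for every trial."""
    trials = np.arange(len(spike_counts))
    rates = np.empty(len(trials), dtype=float)
    for t in trials:
        w = np.exp(-0.5 * ((trials - t) / bandwidth) ** 2)
        rates[t] = np.sum(w * spike_counts) / np.sum(w * durations)
    return rates
```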

A watershed model of individual differences in fluid intelligence

http://www.biorxiv.org/content/early/2016/02/26/041368

Fluid intelligence is a crucial cognitive ability that predicts key life outcomes across the lifespan. Strong empirical links exist between fluid intelligence and processing speed on the one hand, and white matter integrity and processing speed on the other hand. We propose a watershed model that integrates these three explanatory levels in a principled manner in a single statistical model, with processing speed and white matter figuring as intermediate endophenotypes. We fit this model in a large (N=562) adult lifespan cohort of the Cambridge Centre for Ageing and Neuroscience study (Cam-CAN) using multiple measures of processing speed, white matter health and fluid intelligence. The model fit the data well and outperformed competing models, providing evidence for a many-to-one mapping between white matter integrity, processing speed, and fluid intelligence; it can be naturally extended to integrate other cognitive domains, endophenotypes, and genotypes.
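
In rough structural-equation terms, the watershed hierarchy amounts to many white-matter tract measures feeding several processing-speed measures, which in turn feed a single fluid-intelligence factor. The notation below is schematic — mine, not the authors' fitted model:

```latex
% Schematic watershed hierarchy (illustrative notation, not the fitted model):
% tract measures W_i -> speed measures S_j -> fluid intelligence F
\begin{aligned}
S_j &= \textstyle\sum_i \lambda_{ji} W_i + \varepsilon_j
      &&\text{(each speed measure draws on many tracts)}\\
F   &= \textstyle\sum_j \beta_j S_j + \delta
      &&\text{(fluid intelligence draws on the speed measures)}
\end{aligned}
```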

Somebody explain to me again why we have journals

Sometime in the early 1990s, an influential underground record producer named Steve Albini wrote an infamous piece for the punk rock magazine Maximum Rock n’ Roll titled “The problem with music.” In it he outlined what a lousy deal you got being a moderately successful band on a major label. It really was quite shocking to see how the record industry managed to operate with a business model so incredibly unfavorable to its content producers. With a reasonable accounting example, Albini showed how a band could sell 250,000 records and make more than $3 million for the industry, yet somehow end up $14,000 in the hole. This is because most expenses associated with making the record and touring were recoupable from the band’s 10-12% cut of the retail CD price. That left the record company and retailers nearly 90% of the gross revenue from selling a piece of plastic that cost about two dollars to make. At the time, CDs were the state-of-the-art medium for delivering your content to the public, so most bands went along with it, or at least aspired to. Plus, the endorsement of a major label brought with it prestige and an acknowledgment that your band was making top-quality music! Right?
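
To see how that arithmetic plays out, here is a back-of-envelope version of the ledger; every figure below is a round number in the spirit of Albini’s example, not his actual accounting:

```python
# Back-of-envelope major-label ledger. All figures are illustrative
# assumptions in the spirit of Albini's example, not his published numbers.
records_sold = 250_000
retail_price = 13.00                    # assumed CD retail price ($)
gross = records_sold * retail_price     # ~$3.25M flowing to industry/retail

royalty_rate = 0.11                     # the band's 10-12% cut
band_royalties = gross * royalty_rate   # ~$357,500 on paper

recoupable = 375_000                    # assumed recording, video, and tour
                                        # support, all charged back to the band
band_net = band_royalties - recoupable  # negative: the band owes the label

print(f"gross: ${gross:,.0f}  band net: ${band_net:,.0f}")
# gross: $3,250,000  band net: $-17,500 -- roughly the hole Albini describes
```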

If you’re laughing at the foolishness of this arrangement, academic scientists, you might want to stop and think about that.

Throughout the 1990s, the record industry made a truly insane amount of money selling those pieces of plastic. Then, with the rise of the Internet and digital delivery mechanisms, it became less and less attractive to own them. You see, it turns out that the plastic was just a delivery mechanism for the actual content that gave the plastic any value at all. When was the last time you listened to a physical CD? For many of you, I suspect it’s been some time, and surely even longer since you actually bought one. Have you stopped listening to music? You are hopefully developing a sense that it might be reasonable to ask, “Why are record companies still a thing?”

*********

So if it isn’t obvious where I’m headed with this, let’s examine the parallels with the business model of journals. Scientists are the content producers, the bands if you will, for whom the journals provide a distribution mechanism. Like the bands, we bear the burden of funding the creation of the content, and the journals are selling the pieces of plastic, er… hard copies and reprints. But it’s so much stranger than that! We are, incredibly, also the unpaid review staff, and we are the sole customers twice over: on the front end, when we pay them to disseminate our work, and on the back end, when we pay them again to access that work. Understand that in the digital age, journals are offering the scientific community content generated by us that we would be honored to let people read and discuss for free, along with a judgment about the soundness and importance of that content that we also regularly provide free of charge. Sometimes to the chagrin of the recipient. They also offer copy editing and page layout services.

“Why are there so many journals?”

Does this really make sense to anyone? There was certainly a time when journal identity and branding mattered, when one looked forward to flipping through the pages of a favorite periodical to find the work of like-minded scientists. But let’s be honest: the proliferation of journals is out of control. When you let go of the romanticism of the above scenario, does anyone seriously think there need to be hundreds of neuroscience journals? I can think of two very cynical reasons that this is the case. Cynical reason #1: rampant journal speciation allows publishers to generate more “products” to bundle and sell to our institutional libraries, despite the reality that the scarcity of journal pages is entirely illusory. This is exactly like printing more money: temporarily generating wealth, with diminishing returns as each unit is devalued. Cynical reason #2: scientists, also laboring under the illusion that pages are a finite resource, want more chances to roll the peer-review dice so they can be assured that every paper finds a home. But we’ll get back to those fickle dice later.

“Why is there more than one journal?”

OK, so if we agree that there don’t need to be hundreds of neuroscience journals, then why does there need to be more than one neuroscience journal? Why does there need to be more than one biology journal? Well, for starters there are cynical reasons #1 and #2, of course. But what about how you find stuff? Given the search tools and aggregators available now, and given the complex, multidisciplinary eclecticism of modern biology, the notion that one could or should stick to reading a handful of journals to get all the science one needs seems hopelessly quaint. Good luck with that. Do I need to point out that omnibus journals like eLife and PLOS ONE have sections? So does bioRxiv.

Noooo… the reason there is more than one journal comes down to one basic question: “How will I know what to care about/what to read/who to hire/what to believe/how to stand out?” Is there anyone among you who is willing to say that they look to the journal title to decide whether a piece of work is believable and/or interesting? To anyone who raised their hand, with all due respect, I’m going to gently suggest that you are definitely a complete chump and possibly not really a scientist.

“Why is there a need for journals at all?”

I will admit that this is a difficult one, because the current system is all we’ve ever known. Peer review has been, and should continue to be, the fundamental pillar of scientific rigor. Nevertheless, I think it can be improved dramatically.

First of all, there has to be some venue for distributing and communicating science, so this is better posed as a more specific question: “What is the benefit of having peer-reviewed journals as we know them?” If you haven’t already, I highly recommend reading this white paper by Michael Eisen and Leslie Vosshall. They are certainly not the first to suggest that our system of journal-based pre-publication peer review merits re-examination, but they are a recent example that has gotten a lot of attention. I will not try to recapitulate their already well-articulated argument, but I will say that I find it very compelling. More and more, I am frustrated with the current system, and convinced that we can do better. For example, back to the rolling dice of peer review: what is the value of a process so stochastic that a) good work often gets delayed by needing to roll the dice many times, and b) any flawed study can overcome its shortcomings by rolling the dice enough times? Until you jump through this hoop your work doesn’t exist, and once you do, it’s all but set in stone.
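
The dice metaphor is easy to make precise with an illustrative model: if a flawed paper has some fixed chance p of slipping past any one journal’s reviewers, and submissions are roughly independent, then its odds of landing somewhere grow geometrically with the number of tries (the numbers below are made up for illustration):

```latex
% Illustrative model: probability a flawed paper is accepted somewhere
% within n independent submissions, each with acceptance probability p.
P(\text{accepted within } n \text{ tries}) = 1 - (1 - p)^{n},
\qquad p = 0.2,\ n = 5 \;\Longrightarrow\; 1 - 0.8^{5} \approx 0.67
```

Two flawed papers out of three make it through within five rolls of the dice; the process rewards persistence, not soundness.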

For those of you thinking the answer to the above question is that they keep crap from getting published…

Bwhahhahaahahha…

No really that’s hilarious!

Somebody explain to me again why we have journals?