# I don’t always use LaTeX, but when I do…

Since I haven’t published anything in almost two months, let me jot down one thought that has come to mind frequently over the past few months.

## If you use LaTeX …

Well, first of all, are you sure you have to use LaTeX? By which I mean, are you sure you can’t use Markdown+MathJax or Textile+MathJax or reStructuredText+MathJax? Especially if you’re teaching, are you absolutely sure you are completely and utterly unable to use something simpler? Something more modern than learning a hundred bits of print typesetting that your students will never, ever need? Ok, just checking. So…

## If you (have to) use LaTeX, then make HTML your primary output.

By which I mean: don’t just produce PDFs that nobody can read on small screens, PDFs that nobody can read accessibly, PDFs that nobody will want to read in 5 years.

Make HTML your first output. It’s important. HTML is the future engine for mathematical and scientific content. If you can’t produce HTML, then ur doin it rong. If you don’t produce HTML, you won’t ever help all the people working on pushing math on the web forward.

It won’t be trivial, but it’s easier than you think. Install LaTeXML and learn how to use it. (Alternatively, you probably have a copy of tex4ht installed with your TeX distribution.) How hard is it? This hard:

    latexml     --dest=mydoc.xml  mydoc.tex
    latexmlpost --dest=mydoc.html --format=html5 mydoc.xml
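If tex4ht is what you have at hand instead, the usual entry point is its `htlatex` wrapper. A minimal sketch (the option string varies by setup; `xhtml,mathml` is a common choice when you want MathML output):

```shell
# tex4ht: compile mydoc.tex straight to HTML, emitting MathML for the math
htlatex mydoc.tex "xhtml,mathml"
```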


And when you run into LaTeXML limitations, then get over it, report them back, and help make it better. If you run into problems with MathJax, report them and help make it better. You need graphics? Check out XyJax. You need pstricks? Check out Mathapedia. You need computations? Check out the Sage cell server. It’s all there, but you have to get started. Do it today.

But if you’ve ever wanted math to be native on the web, then you have to realize that it won’t happen without your help.

If you’re too lazy to convert (e.g., when you’re teaching), then use something that compiles to both TeX and HTML (like Markdown+MathJax etc.). Pick a decent tool for it: Qute or ReText, write on the web with FidusWriter or Authorea.com, write in your favorite Mac editor with Marked, extend Sublime Text, or use Writing Kit on an iPad. And that’s just off the top of my head; there are many more editing environments that offer good syntax and MathJax integration. Many can save to TeX documents, and anything can be converted via pandoc. It’s not perfect yet, but it won’t get better unless you give everybody some feedback.
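To sketch that pandoc route (the filenames here are placeholders; `--mathjax` tells pandoc to emit HTML whose math is rendered by MathJax):

```shell
# One Markdown source with math, two outputs:
# a MathJax-ready HTML page and a TeX document.
pandoc -s --mathjax -o notes.html notes.md
pandoc -s -o notes.tex notes.md
```

The `-s`/`--standalone` flag asks pandoc for a complete document rather than a fragment, which is what you want for both targets.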

So, if you can, don’t author LaTeX, author into LaTeX. And whatever you do, compile it to HTML. It’s important.

credit: http://www.quickmeme.com/meme/3umuyt/

# Publishers should invest in browser development (a comment at the scholarly kitchen)

In the tradition of posting stuff I write elsewhere, here’s a comment I just posted at The Scholarly Kitchen. It’s not really about the article.

On a slightly different note. Despite many investments in typesetting technologies in (and of) the past, publishers are investing very little in the primary typesetting technology of the future: HTML rendering engines.

A good example (though I’m biased) is MathML, the W3C standard for mathematical markup. Despite being used in XML publishing workflows for over a decade, and becoming part of HTML5, no browser vendor has ever spent any money on MathML development. Accordingly, browser typesetting “quality” is highly unreliable (unless you use MathJax — which is where I get my bias from).

Trident (Internet Explorer) has no native support (though there is the excellent MathPlayer plugin), Gecko (Mozilla/Firefox) has good support thanks to volunteer work, and WebKit (Chrome, Safari, and now Opera) has partial support, again solely due to volunteer work. (Unfortunately, only Safari is actually using that code; Google recently yanked it out of Chrome after one release.)

This isn’t surprising from a business perspective — for the longest time, there was simply no MathML content on the web. But of course, this was a chicken-and-egg problem: no browser support => no content => no browser support => … And it ignores the impact MathML support would have on the entire educational and scientific sector where it would enable interactivity, accessibility, re-usability, and searchability of mathematical and scientific content. (Including ebooks — MathML is part of the epub3 standard.)

Now you might say MathML is just math, a niche at best. But very likely its success will determine if other scientific markup languages will become native to the web — languages like CellML, ChemML, and data visualization languages. These will probably see even less interest from browser vendors but will have enormous relevance to the scientific community.

Right now, scientific publishers (in my experience) have neither the expertise nor the interest to engage in browser engine development. Unfortunately, they also don’t put pressure on browser vendors to improve typesetting (whether scientific or otherwise). That’s very short-sighted, I think. Given that Gecko and WebKit are open source, a joint effort of publishers could very well fix things, and show the community that publishers have their eyes on the future rather than the past.

# Why academic societies should start fully fledged social networks

The Joint Math Meetings 2013 ended with the AMS’s 125th Anniversary banquet. One of the things mentioned there was that the AMS is working on some form of online communities. That’s great, but doesn’t go far enough.

## 1. It’s their nature

Academic societies have always been social networks. Online, however, a social network looks different from conferences, book ordering and membership areas.

A lot has been done already. Take the AMS: MathJobs has solved the job-search problem, and MathSciNet has solved searching the literature.

But what is needed is true social connectivity; in other words: conferences, workshops and seminars. The net connects everyone, all the time. Why leave it to people who don’t care about the community? The big social networks are important, but they will never help smaller communities like the mathematical one, never provide the tools we need.

## 2. The ultimate appeal

Nobody should trust a social network paid through ads. You may trust it a bit more when you pay for it (e.g., app.net).

But a network run by a society (or a joint venture of societies) would have the ultimate appeal: trust and oversight.

Because societies are democratic, they could establish transparent, democratic oversight over a key technology for the community.

## 3. It’s their mission

But let’s take it at least one step further. Diaspora and other decentralized social networks are a beautiful idea. Societies could move such tools forward, thereby empowering distributed social networks.

On the one hand, members could easily connect across different societies, on the other hand, members could choose to fully control their data on their own servers.

A social network run by a society that serves its members and community would not be opposed in principle. Which other social network could say the same?

In addition, the underlying software would naturally be open source — both for transparency and scientific reasons.

This would enable everybody to take this important step — a bit of internet enlightenment if you will.

## 4. It’s forward thinking

Social networks are the new publishers. It’s interesting to read this post at the Scholarly Kitchen, which looks at societies from the reverse angle; the fact that PeerJ is moving publishing towards a membership model is important.

Publishers should fear societies since they will always be able to offer something fundamentally different — a self-governing community.

## 5. It has consequences

Right now, the majority of users are on at most one social network, usually Facebook (though mathematicians have sometimes skipped that and followed all the cool kids playing on google+).

I expect the majority of users to soon get comfortable with being on multiple networks. This is also why I expect that we will eventually have better connectors between networks. Granted, this touches on a lot of major internet issues (net neutrality, walled gardens etc.), but I prefer to be optimistic about the future.

But the real consequence is: this will cost money. And members should be ready to pay for it. Just as we should for publishing.

# JMM 2013

Early this morning, I drove down to San Diego to be at the Joint Math Meetings 2013 for the very first time. (Well, last year I mostly sneaked in to meet friends and didn’t even register or go to talks, so I guess that’s fair.)

It seems ironic and yet fitting that my first JMM is also the first meeting since I left mathematical research (in the traditional and definitely the (for me) previous sense). Representing MathJax is challenging, exciting, and simply a lot of fun.

Of course, meeting up with old friends is an added bonus that’s simply priceless.

# The Forum of Mathematics, blessing or curse?

When the Forum of Mathematics was announced on Tim Gowers’s blog, I mentioned it on twitter and got a couple of replies asking what my thoughts on it were. Well, this post has been stuck in my draft folder for too long, long enough for these journals to open for submissions, blimey.

2012-11-19, quick correction: people have pointed out in the comments that Sigma is already going to be a regular old journal (see Gowers’s post, comparing it to Combinatorica). I’m not yet sure if that makes it better or worse.

### Open thingamajig

The Forum of Mathematics is a new journal. Well, no, it’s actually two journals, Pi and Sigma (yes, not π and Σ). (This is surprisingly nerdy for the honorable Cambridge University Press; just imagine all the inside jokes you can make with it (but we’ll get back to that).)

These are two journals in mathematics. So far, so boring. They are open access journals, more precisely Gold Open Access, i.e., you pay a fee once, when your publication is accepted, and then your publication is published and available under a permissive license (Creative Commons in this case).

My two regular readers may know that I’m a big fan of Gold OA as a mid-term solution to our primary publishing problem, which is non-free publishing. A lot of people criticize the level of fees that comes with Gold OA publishing. For example, the biggest OA journal and the first “mega journal”, PLoS ONE, charges a whopping \$1300 (there are institutional rebates, and you can ask for a waiver), and that’s cheap compared to Springer and Elsevier. But \$1300 is something that most mathematicians cannot get their hands on in their grant proposals right now. There are numerous explanations of why this isn’t the figure you should be concerned about, e.g., how your library saves money so that your department can fund your fee; and then you usually get the return argument “not everybody works in a department”. That’s all good and fair, and not the topic of this post.

It doesn’t matter.

Gold OA is currently the only viable form of OA on a larger scale.

Yes, we have examples of what some call “diamond OA” in mathematics (open access without any fees whatsoever). In fact, my first paper was specifically published in the NYJM because it was diamond OA. Here’s what I based my decision on: it was clear that that (any?) paper would not end up in anything fancy, you know, high-profile, glamour mag etc. So I looked for a long time to find a journal that a) had a publication in my field (the NYJM did) and b) had other publications by respectable people (the NYJM had). So I chose the NYJM, and it was a fun experience, although there’s actually a very critical post that I have to write about it [[not so much critical of the NYJM, but of my own work]].

Back to the Forum of Mathematics. It tries to do the right thing. First off, it’s open access; that’s a good thing. Then, it’s Gold OA; that’s the decent thing, and the only way we can publish at that level while we lack the infrastructure for diamond OA at that level. (I’ll get to what I mean by “that level”.)

Then they try to be competitive, which is a good thing. Finally, they try to be affordable, which is also a good thing. The journals will be free of charge for the first three years. They hope to find donors to keep them free, but otherwise will start charging £500/€750. That’s still not very low, but at least it’s not what Elsevier and Springer will ask you to pay.

### Now you see me, now you don’t

So that’s the open access part of the story. Now to the special mathematical twist. The Forum of Mathematics comes as two journals, both with the same price tag. What’s the difference? The difference is that Pi stands for (let’s follow Tim Gowers’s suggestion) “primo” and Sigma stands for “secondo”. What does that mean? It means Sigma is what you’d call a “PLoS ONE for mathematics”: it is designed to be a mega journal in the same vein, where the refereeing process will only check the correctness of your paper (yes, and plagiarism, nonsense etc.).

Pi, on the other hand, aims (as Gowers describes) to become one of the top three journals in mathematics; that’s the goal. For this purpose, it also adds a few fancy innovations. For example, you actually have to write a two-page statement for your submission arguing that your work is important enough, which makes its goal as transparent as it makes it ridiculous. (But let’s not go there right now.)

### Pleasure and pain

So what are my thoughts on it? My first thought was “thank god, finally somebody is doing something serious about this”. We’re lacking a PLoS ONE for mathematics, and that’s absolutely clear. In fact, it’s bizarre that we’ve been so far ahead in the game (with the arXiv for 20+ years) and yet we’re so far behind in everything else that’s happened in publishing in the last 10 years. So thankfully somebody said “let’s do PLoS ONE for mathematics”; that’s a great move.

But my immediate second reaction was: “WTF?!?!”. For me at least, the idea of PLoS ONE is ruined once you add something like Pi.

The whole idea of PLoS ONE is to leave the bickering of editorial boards behind. PLoS is investing quite a bit of money (now that they actually make a profit off PLoS ONE) in the fundamental idea that is PLoS ONE: editorial boards are not very good at identifying what’s important research; they are simply bad at it. (I’ve ranted about editorial boards as the core problem of academic publishing before, no?)

So PLoS ONE goes the other way and says “we don’t care about importance; we check that it’s science, and the community decides what’s important”. How do they do that? Well, for the longest time they didn’t do much; they experimented, tried what they could with their means. But what you see now is a serious investment in alternative metrics, i.e., in means to aggregate the impact that an individual paper has in the community. Not the impact that some editorial board members think the paper should have on their community, no, the actual, real impact: “How many people get their hands on it?”, “How many people read it?”, “How many people leave a comment, talk about it on social networks, on blogs, on whatever else you can think of?”. This is the true democratization of academic publishing: to realize that editorial boards are very good at organizing fact-checking but unnecessary for identifying what is important to the community. At first sight, this might seem more prevalent in the sciences, but in reality it is extremely prevalent in mathematics as well. Mainstream mathematics dictates what’s important (Fields Medal, anyone?), and non-mainstream fields can take it as an excuse for their own lack of impact.

Alright, that was maybe too much of a rant. I know that most editors are highly decent people; they are benevolent dictators, but they are dictators nonetheless, and glamour-mag editors in mathematics have just as much power over the community as they do in the sciences.

### The Game of Thrones

So what does Pi turn the idea of a PLoS ONE journal into? It turns it — and this is my second point — into a power grab.

Imagine that Sigma becomes the PLoS ONE of mathematics and that Pi picks the “great” papers out of Sigma. Then what do you get? You get one mega journal that collects a considerable amount of mathematical publishing (all of it?), and another that picks the raisins out of it. What’s wrong with that picture?

The picture I’m left with is a massive concentration of power within a single editorial board, across the entire publication range in mathematics. If Sigma can capture a large part of mathematical publications (and how could it not? it’s respectable and free!), then the Pi editorial board will have all the power to dictate what is important and what isn’t. Just think about it this way: why would I submit to the NYJM if I can submit a weak paper to Sigma with the additional, faint hope of making it into Pi?

This is a huge issue!

Now, I’m not saying this must happen. Just like with PLoS ONE, there can be competitors sooner or later, probably as soon as it becomes a successful business model. But the damage it can do in the meantime could be considerable.

PLoS ONE started out as an experiment; it was the first of its kind, and it had no idea that it would take off to become the biggest journal in history. With Sigma, on the other hand, we know this, and we can see that Pi is designed to profit directly from this potential, picking raisins from the first mega journal in mathematics.

What would that mean to small, enthusiastic, diamond OA journals that exist right now? (Besides, is there enough room for a real competitor?)

### Coda

Sigma is a copycat of PLoS ONE, which was founded 10 years ago; that’s how far behind we remain. The only innovation is Pi, which is actually a step backwards (and the lack of alternative metrics is another step backwards).

Where are the really new experiments? Our research is made for the web, to be communicated through the web in text, speech and demonstration. Yet we do not take the experimental playground seriously enough. We simply stay behind everybody else, ready to complain about all the big bad things coming out of the scientific side of publishing.

Could we jump ahead? Based on the experience of MathOverflow and math.SE, no doubt. The mathematical community is open to exploring new ways to do and communicate research.

And I wonder: if Forum of Mathematics is considered a “big experiment” then I fear that we’ll stay behind by 10 years and soon enough we will be behind 20 years.

tl;dr
Great to have a PLoS ONE for mathematics, but I worry that the Pi editorial board could end up the most powerful editorial board in history.

# A virtual Kaffeehaus on g+

So that went well.

Two weeks ago I tried to do something that I had always wanted to do and that Sam had done a couple of times with a more specific focus: use google+ hangouts simply to meet people.

If you don’t know them by now (go read Sam’s posts!), google+ hangouts are really the only reason for me to be on google+. I know, I know, there are tons of mathematicians on google+, and for a research mathematician it’s probably the best social network. But that’s beside the point.

For me, the key feature is the hangouts. They are the first free video-conferencing system that works, in fact amazingly well, with a wealth of features (screen sharing, collaborative writing and, of course, pirate hats); with the On Air feature, it even allows you to record your hangout and have it on YouTube afterwards. In short, it’s a pretty good deal (you pay in privacy, of course), and you see a lot of fantastic people using it for all kinds of stuff, e.g., very prominently Barack Obama, but also scientists such as Bad Astronomer Phil Plait doing Q&As or virtual star parties, hooking up a telescope to look at your favorite planets. It’s fantastic stuff.

So what would I be doing with hangouts? I just moved to LA, which means I left a good deal of friends and contacts behind (yet again), and I have the need to, quite literally, hang out with friends. Then there are also other people I have always wanted to get in touch with: all those fantastic bloggers I got to know on twitter, on their blogs and in other places, who are doing interesting stuff all the time. I would love a chance to talk to them.

Finally, two weeks ago, I tried to have a hangout. I didn’t announce it until it started, and (surprise!) it didn’t work at all. The simple reason: nobody was around! Desperate as I was, I even made the hangout “public” (which means anybody can join in), which quickly got really weird. Thankfully, my connection immediately crashed when random people showed up and tried talking to me. (I should’ve known better, actually, since there are websites that list public hangouts. Be careful what you wish for…)

How could I create the hangout I had actually wanted? A hangout that is open for people to join without demanding it of them. You don’t want to be open to everybody, but you do want to be open to a lot of people, people you may never have met in person but know through some form of communication or another.

Last week, I tried to do it a little better and specifically invited people to an “event”, another google+ feature which (as on other social networks) annoys people with invites to random events that they don’t care about.

To be less offensive, I did this last minute, i.e., the evening before the hangout, and explained the point of this in the “invitation”. Mostly, I wanted to give people ample opportunity to ignore the “invitation” because I wanted to keep the hangout light, informal, no strings attached.

And it actually worked. I got a chance to hook up with one American and two English mathematicians and bloggers whom I actually quite admire: Christian Perfect, Vincent Knight and Patrick Honner. All four of us were there for only half an hour, but it was wonderful: I had my morning coffee, talked to interesting people in person for the first time, and just generally enjoyed being able to connect.

And that’s what I would like to have: the equivalent of a Wiener Kaffeehaus, a place where interesting people gather and you’re essentially sure that you’ll run into someone, even though you might not know who exactly or for how long. But when you do, you can sit down, sip your coffee and have a decent conversation.

But any good experiment requires reproduction, so yesterday I followed the same pattern and chanced upon Patrick Honner, Vincent Knight, Dana Ernst, our own Sam Coskey, and even Andrew Brooke-Taylor (on g+), who stopped by for a few minutes before going to bed (in Japan). Arguably, I talked too much (nobody who’s met me will be surprised), but it was a lot of fun.

Well, that’s three data points. But it has again strengthened my conviction that hangouts/videoconferencing will have a huge impact. Don’t get me wrong, we’re not there yet. For example, when Andrew jumped in, I would have loved to “get up from the table” and sit down with him privately to catch up. But nevertheless, hangouts go in the right direction. As a video chat room they are not yet as flexible as a Kaffeehaus, but it feels like we’re almost there and that it’s no longer the technology per se that’s holding us back (10 video streams are almost certainly enough for my purposes).

Soon enough, we might get a real Kaffeehaus, where you can sit at a single table following a single conversation, step away for a nice quiet chat (yet overhearing the ongoing conversation) or wander over and meet some new people at some new table.

For mathematics (and research in general) this is a great opportunity, to be able to connect with other researchers (or even the great unknown “public”) in yet another crucial way. If MathOverflow becomes the common room, then video-conferencing could become the coffee shop.

I look forward to trying this again next week. If you want to drop by, just let me know.

# self-publishing, the academic community and LaTeX fanboyism — a comment at Devlin’s Angle

Yet another one of those “Peter babbled too long on somebody else’s blog” posts. This time at Keith Devlin’s MAA column/blog, Devlin’s Angle.

About your reply to Corey’s comment: “That will surely change very quickly” is something I’ve been hearing all my (academic) life, but nothing is happening; academia proves highly conservative. The main problem is that young researchers willing to seriously experiment will often not gain enough “traditional” merit compared to those who just play the game, and those who successfully play the game will rarely see the need to experiment later.

This is a serious problem that deserves much more effort from the few established researchers who are influential, established, and open to new ideas: help young researchers get the credit they deserve for their experiments, such as self-publishing (can’t help but add: and publishing open access or even open source). Or in other words: it’s great to hear that self-publishing worked for you, this time, but can somebody else reproduce it?

Finally, LaTeX (as a binary) is nice for producing print output, but it is practically incapable of doing anything else (and actually, professional typesetters will readily complain about the quality of TeX’s output).

As Peter Rowlett and you pointed out, even the best reflowing PDF viewers (Kindle, Nook) are quite limited. However, that is actually the author’s fault: it’s like trying to build an iPad with manufacturing equipment from 1978 (or, for that matter, teaching a MOOC in 1978).

So instead of using LaTeX to do what it can’t do (produce content for an HTML environment), authors need to take the next step and switch to authoring systems that can produce both good print and good HTML. That’s hard right now, but worth an experimental effort (good keywords: pandoc, asciidoc, reStructuredText, Sphinx), and I’d volunteer right away to help, actually.

After all, with the adoption of MathML 3 in two critical standards (HTML5 and epub3) and with technologies like MathJax, mathematical content in HTML finally makes sense.

(Disclaimer: I’m involved in the MathJax development)

Thanks to this discussion on g+, I just had to add another comment.

One small addendum. Here’s such an experiment going all the way to XML: Rob Beezer’s linear algebra book http://linear.ups.edu/index.html which (due to its flexibility) is part of the IDPF’s official epub3 sample repository https://code.google.com/p/epub-samples/

# Has it really been a year?

This is a joint post with Sam; go comment at his place!

Almost exactly a year ago, the two of us (Sam and Peter) sat down to talk about what we could do together to help mathematicians using the internet.

A few months earlier, we had started a small project we called SetTheoryTalks, a simple wordpress.com blog that announces and aggregates set-theoretic talks from around the world. Even though it has grown into its own website, STT is, at heart, a very simple service, and we wanted to take another step.

Essentially, we just wanted to do a better job with our professional home pages. Sam, for example, was quite simply sick of ssh’ing into the server; it’s not the way professional web pages are updated anymore, and it certainly isn’t conducive to displaying the most up-to-date information. (Peter, on the other hand, just wanted some cool and shiny stuff that hid his incompetence at web design.)

But while we were at it we thought: why can’t we do something that helps everybody? There are all of those weird and sometimes laughable “home pages” that we constantly come across when we search for mathematicians and articles. Can’t we do something about that?

The second issue we wanted to tackle we called “the Zeilberger problem”. The root of this problem is that mathematicians have been using the internet for a very long time. Since the early nineties we have actively used the web for preprints, notes, lecture notes, research material and so on. But meanwhile, the internet has been changing and we have not changed with it. A mathematician such as Doron Zeilberger can get away with that because of his stature. But other researchers really have no excuse when their web sites look as if they were written in 1992, and moreover make it extremely hard to interact with the researcher, i.e., use none of what modern web technology has to offer in terms of interactivity, exchange and, generally, presentation of content. The web is much more than just hand-written HTML with GIF-tiled backgrounds.

And so we registered a domain and set up shop, embracing the wonderful WordPress. It took a while to come up with a name, but we chose Booles’ Rings and boolesrings.org.

It’s been quite a ride this last year. We started out with a group of really just three: the two of us and Katie Thompson. We slowly expanded to roughly 10 users, which is not too shabby considering that set theorists aren’t exactly known as the most outgoing of people. And while we could not have predicted how it would look today, the outcome exhibits exactly what we had hoped for: the members of Booles’ Rings are using their sites in quite a diverse fashion.

First of all, a good deal of academic progress has been disseminated on Booles’ Rings. For instance, Joel Hamkins has an incredible academic output, and he continually posts his talks and papers. Others, like Vika Gitman, have followed his lead, posting long summaries of their latest research.

Then you have people like Saf, who writes detailed notes on research-level mathematics, working through Todorčević’s lecture notes and making a serious contribution to the amount of information that is available on the web. And yet another style can be found at François’s site, where short posts with just a quote, a comic, or a problem meet serious academic research and an overflow of ideas from his work at MathOverflow.

Of course, we also had a few lively discussions about blogs, publishing, and interactive home pages in general. While Sam covered everything from refereeing to experimental math hangouts on g+, Peter went all the way from the ongoing publishing debate to experimenting with a format he calls the “micro-contribution” (a nugget of research that shouldn’t be kept secret but which is too small for a formal journal).

Overall we are very happy with this small ecosystem of articles. But there are still many more things we wish to accomplish. First and foremost, we want to introduce the concept of a dynamic web page to a much wider audience. To do this, we plan to build a repository of documents and tools to help others reproduce our experiment. We also plan to create a version of the site that is open to the public: a version which is more stable and which learns from the Booles’ Rings experiment.

Finally, we want to make Booles’ Rings even more useful to academics by adding more features. While we have developed a few plugins and scripts to help with dissemination and collaboration on boolesrings, a lot more can be done. For instance, we plan to develop a plugin to use your home page as courseware for teaching.

We hope we can count on you to help get us to the next stage, and we look forward to the second year in the life of the Booles and their Rings!

# 11 dreams for the publishing debate, the complete version

This is a double post of sorts. The reason is that it took me a very long time to write this (in fact, the first draft is marked April 18). In the process, quite a number of versions were stored by WordPress. As you can see, I ended up splitting the original post into 11 individual ones. (Actually, I ended up extending each of the individual posts, adding another ~1000 words, a whopping 4000+ words in total.) Anyway, the point is, I wouldn’t want to delete this original draft, for its history should be quite interesting if I can ever get myself to make all revisions public (there’s some rude ranting in there). So please excuse this double post.

As always, check out the first post for more context.

These are dreams. Some are realistic—perhaps just around the corner; others are way out there—basically crazy. Some will apply to everyone, others only to some. But all have diversity in mind, diversity in our expectations of who researchers are and what they do.

### 1. write fewer original-research papers

I know what you’re going to say. But hear me out. This is at the core: to enable researchers to publish fewer “new result”-papers.

I believe all major problems brought up in the debate are, at the heart, caused by the immense increase in publications — but not the global increase, the personal one. You have to publish far too much/big these days to get a half-decent position/grant. Increasing publication numbers did not increase the quality of research or, for that matter, the “quality of life” in our communities.

Instead, the massive inflation is killing us, devaluing everything we do as researchers. More papers mean that our papers are worth less. Having to publish more papers means we produce more research of questionable quality (unintentionally and otherwise). Especially young researchers have to publish for metrics instead of quality. Worst of all, evaluating researchers only by this steam-punkesque output means that the jobs go to people with this one singular skill — writing the right kind of papers to please the editorial boards of the right kind of journals — leading to an intellectual monoculture instead of diverse, stable, rich communities. In particular, the pressure works against women and minorities as they often start with and continue to face disadvantages in their careers that make it harder to produce the desired “output” in the desired time frame.

(If you’re wondering why I’m not bashing “evil publishers”: I don’t think they are the problem — we are. If you are happy with the inflation of papers-in-journals, then big publishers are what you need.)

### 2. get real credit for surveys, reviews and exposition

Surveys, reviews and expositions are research — nothing more, nothing less. We live in a time when it is actually more important to write expository work. Why? Because we’ve optimized our production pipeline so well that there is no shortage of new(-looking) results. Yes, you can argue about “relevance” (I won’t, but I can’t stop you). But if you’re serious about all that “research for research’s sake” talk, then you might notice that we’ve figured out how to educate tens of thousands of people every year who do nothing but churn out result after result.

As Felix aptly wrote: we need to move beyond “new” results. And this means stepping back from them and, well, reviewing. Surveys are the glue that holds our fields together, holds our communities together. We do it all the time — if you read a paper thoroughly, you’ll most likely write a review anyway. As we work on larger projects or grant proposals, our notes aggregate and easily become surveys and expositions. We need to make all of these public so that the community and the original author(!) can benefit from this enormous creative output that otherwise might only show up in a reading seminar or a journal club.

Mathematicians usually do even more. When we read a paper, we create our own version of the results. Mathematics needs these just like music needs different interpretations. We need to give people who are “just” re-writing proofs the credit they deserve: the credit for keeping results alive, accessible and understandable to a wider audience than the original author and referee.

And by credit, I don’t mean vouchers for some bookstore. Surveys must be first-class citizens! Surveying and reviewing other people’s work belongs on your CV; it is no less original than your “original research”. Every department should have people who are exceptional at it, people who are better at surveys, reviews or exposition than at “new” research and who add to the diversity of the department.

### 3. get real credit for refereeing

Refereeing is research, just like surveys and “new results”. It might seem redundant after dream #2 (and in many ways I would say that’s the goal), but given the importance of refereeing right now and the way we do it, it’s separate.

Currently, good refereeing seems nearly impossible. With the increase in publications we can’t even keep up with what’s happening in our own narrowly focused interests — how, then, can we expect to referee properly? Sam tells his own funny but sad story of refereeing honestly, but the problem runs deep. Good referees are hard to find, and when you find them, they could be writing a paper instead of refereeing — and why should they not?

There are many ways we could improve refereeing. We can (and should) split up the different stages of refereeing (reference checking, correctness checking on different levels, opinion gathering etc), we should open pre- and post-publication peer review (and in-between peer review), we should use alternative methods to collect these reviews (instead of review journals that have at most one opinion per paper).

One key part of refereeing is forming an opinion — and voicing it. That’s very hard, in particular for young and disadvantaged researchers who fear significant repercussions. But we don’t need fewer people sharing their opinions on other researchers’ output, we need more (whether they agree or not); it’s a responsibility both to our own research communities and to society at large.

But none of this is likely to work if we don’t find a way to give referees the credit (and criticism) they deserve. Before we can praise referees, we need tools to evaluate refereeing, both in its current and its future forms. Above all, we need to be able to put refereeing on the CV; it’s part of the research quality that every strong department should strive for.

### 4. get real credit for communicating

Communicating other people’s work through surveys, exposition and reviews is important. Then there’s communicating to students, a.k.a. teaching. This is an especially sore point for mathematics, where undergraduate teaching has become a blunt tool to weed out students for other disciplines — our self-respect seems greatly lacking. And then, of course, there’s spreading the word to the wider public.

Have you ever noticed that many of the great researchers are excellent communicators (and teachers and surveyors and referees)? I would go as far as to say that a truly great researcher will be great in at least one of these ways of communicating. Without this, you’re only a great something-else. We should cherish not just one ability of our truly great minds, but all of them. Right now we promote researchers according to how their “new result”-output compares to the “new result”-output of the truly great ones. Why are we so one-dimensional?

Also, communication goes both ways, so we must listen. Can you imagine a graduate-student-run, “Law Review”-like journal for mathematics? Graduate students are perfect for forcing you to reflect on your research — we should encourage them to make their thoughts public (including anonymously and pseudonymously). But we mathematicians need to go further. When Bob Moses was speaking at Michigan earlier this year he argued that history will judge us as math literacy becomes a civil rights issue in the 21st century. Are we listening?

Without communication, we risk the longevity of our own research areas because we won’t be understood by the next generation, by other areas and by society as a whole. But this means something’s gotta give and we need to accept that by giving real credit.

### 5. sharing all our work every way we can

One battle that most scientists are still fighting — full self-archival rights — mathematics has long won. We need to make this happen for everyone. We must use the arXiv, our homepages or (if you must) walled gardens such as academia.edu and researchgate. But we should also embrace more recent alternatives like figshare and github to post all our research publicly — preprints, notes, lecture notes, expository notes, simply everything. Open notebook science is the key, but it’s in its infancy. We need to find ways (many different ones) that work for a larger part of our communities so it becomes easier for people to experiment with it and to make it something even better. It might not be for everyone, but it’s something everybody will benefit from.

However, the truth is that even among mathematicians a large group doesn’t use the arXiv, let alone keep a professional homepage deserving the name. I know that especially older researchers often hesitate because technical issues are “more trouble than they’re worth”. This is a challenge: we must argue against this attitude and, more importantly, help sort out the problems within departments, within small research communities and so on, to overcome these obstacles. Approach people, ask them why their papers aren’t available, and help them put them properly online. In all other cases: don’t be a Grothendieck.

### 6. publishing in smaller increments

Have you ever read a paper that seemed to hide its true goal because the author wasn’t finished but had to publish something under the pressure of otherwise not having a job? Have you ever read a paper that made a small but reasonable result look like much more than it really was, just so it would pass as a least publishable unit? Have you ever read a paper that was so badly written that you couldn’t make sense of it?

Paradoxically, one way to publish fewer “new results” papers might be to publish more but differently. For scientists this might seem easier, publishing as the data comes in. But even for mathematics we have all those little results — the small gems, the one-line-proof, the clever counterexample, the good reformulation, the useful shortcut — all those could be published quickly and openly instead of waiting to find enough to “make it a paper”. Just like data, these could be reviewed publicly much more easily and we should get proper credit for doing so (both author and reviewer).

Longer results could (depending on their nature) be published incrementally, with multiple, public revisions. Take the preprint idea one step further and make your writing process public. Use a revision system like github to expose the process. Allow for outside input in your writing process, in your working process. The internet makes us an intricately connected community and we can work together in one big virtual seminar. There are already excellent examples in that area. In mathematics in particular, we have the Polymath project or Ryan O’Donnell’s Analysis of Boolean Functions, a text book he’s working out as a blog.
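Exposing the writing process with a revision system, as suggested above, can be as simple as committing every small revision and pushing it somewhere public. A minimal sketch with git (file names, commit messages and the identity settings are hypothetical placeholders for illustration):

```shell
# Start a fresh, versioned home for a paper-in-progress.
mkdir -p open-paper && cd open-paper
git init -q
# Local identity just for this sketch; use your own.
git config user.name "A. Researcher"
git config user.email "researcher@example.org"

# First public draft: even a stub is worth recording.
echo '# Notes towards a result (public draft)' > notes.md
git add notes.md
git commit -q -m 'Begin public notes'

# Each small gem gets its own revision instead of waiting
# until there is enough material to "make it a paper".
echo 'Lemma 1 (sketch): a one-line proof goes here.' >> notes.md
git commit -q -am 'Add sketch of Lemma 1'

# The whole history stays visible to readers and collaborators.
git log --oneline
```

From there, a `git push` to a public host is what turns the private history into an open writing process that others can follow, fork or comment on.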

But, as I’ve argued, Polymath doesn’t work for very many people, so, again, we need many, many more projects like these so that more people have the opportunity to find a way that works for them.

There’s of course a risk — this could create a lot of noise as incrementally published results implode when data turns out to be flawed, proofs disintegrate and general anarchy rears its head! But I think it’s worth the risk. Search technology is constantly improving and good scientific standards should ensure that failed research is marked accordingly. And we have so much to gain! We might be able to finally give credit for failing productively — the main thing researchers do anyway, we fail and fail and fail until we understand what’s going on. Sometimes we have to give up, but why shouldn’t somebody else continue for us?

Even if your research implodes, you should get credit and, much, much more importantly, you will help others not to repeat your mistakes. Fred once worked on a nice old problem which, after a few months, clearly wasn’t going anywhere. But he realized that all his attacks had probably been attempted before, so he wrote a preprint on all the ways to fail and published his code along with it, so that people in the future might benefit from his failure. Or, as Doron Zeilberger wrote: if only John von Neumann’s maid had saved all the notes he’d thrown away each day!

Or to put it differently: the most exciting result in 2011 was having no result about Con(PA).

### 7. an affordable open access model

Research publications should be free, for both authors and readers. When it comes to traditional journals, there are already some in mathematics (e.g. NYJM) that offer open access without any fees. I believe we can move to a journal system that is entirely open access and without publishing fees, but we’re not there yet. Mathematical journals are said to have profit margins of 90%, so we should be the first to get there. Gold open access is already realistic and, more importantly, can be made affordable right now. With PeerJ, this is already around the corner on a much larger scale.

On the other hand, it seems natural to me to return academic publications to universities and academic societies. For journals, this could simply be done PLoS ONE-style (checking correctness, not “importance”), and our institutions could certainly make such journals open access, non-profit and actually free (just have each department produce one journal). But new ways of doing and sharing research will hardly fit into the journal model. These methods will be much more user-centric, will be about people, not publishers. And the natural place to store information about people is their professional homepage as a repository of their work. It seems natural that universities or academic societies could play a much better role in this than proprietary social networks.

However, one problem we’d be facing is that journal publishing is the cash cow of many societies and this money is often used to cross-finance important services. This is not helpful in a time where publishing is a button. We’ll have to find another way to finance things that need financing and we’ll have to talk about that, too. Additionally, if we move scientific “publishing” to the next level, there will be new costs: costs for research into doing it, costs for experiments, costs for failures. We need to talk about those, too.

### 8. a cultural change of doing research (and metrics for it)

If we publish fewer “new results”, some structural problems might just disappear. Fewer papers means fewer journals, which means fewer subscriptions and less refereeing. A smaller workload and smaller costs. But this can only work if we have tools that nevertheless show what happens in between writing-down-a-decent-result-which-takes-years-dammit. Thanks to the internet, we can actually hope to do this. But the internet will also change everything (again and again) and it will change our communities (again and again). We need to invest resources right now to be able to benefit from this change, to find a way to evolve into a work mode that is more appropriate for what is yet to come than for what was in the past. We need to get into a constant-change mentality and we need to make this worth everybody’s while.

Our funding bodies, academic societies, universities and other institutions must fund more experiments in doing research differently, and the metrics needed for this. At the very least, we need more incubators like Macmillan’s Digital Science, but also in a non-profit fashion along the lines of MathJax’s business model. Conversely, our communities must value anybody’s effort to join new experimental platforms such as MO/math.SE, blogs, citizen science projects, polymath-esque projects, Wikipedia and all those platforms that are still only dreams. The altmetrics people are already setting many good examples, and platforms such as Stackexchange and Wikipedia are working hard to develop reputation technology that will allow us to measure people’s activity in these new scientific environments.

This, of course, means breaking out of the monoculture of “papers in journals” — a rough cultural change. Nobody talks about this drastic change, which, I believe, makes it the biggest problem in this entire debate. If we change the way we do (and evaluate) research, then we ask a great deal of the people who are working well in the current model, who are good at (only?) writing the right kind of papers for the right kind of journals. It would be a revolution if people were hired because their non-traditional research activities outweigh the traditional paper count of other applicants — in other words, if hiring happened strategically, with that kind of diversity in mind. Case in point: even Michael Nielsen overlooks this problem completely in his wonderful book.

This change is even more important for smaller research areas, which can’t play the impact factor game (and it must be made to work for them). Failing to adapt might even mean extinction in this case. But the potential is equally great, as more diverse ways of doing research can also mean a better chance to work across fields and to improve the collaboration and visibility of small fields.

### 9. propagating the Shelah Model — encouraging bad writers to seek out good writers

This may seem a very math-specific dream, possibly extending to the humanities, but it does apply to the sciences in more than one way.

Not only do we have too many “new result” papers, we also have too many horribly written ones. Although there’s certainly a talent to being a great writer of research publications, we face the problem that our communities just don’t care enough to enforce even mediocre standards of writing. This is particularly hard with leading researchers, whose manuscripts are barely scrutinized.

Much worse, however, young researchers have an incentive not to care for the quality of their writing and the work necessary for it. Instead the motto is: “Just get it by the referee and be done with it! You could produce a new result while you waste your time revising this one!”. Referees and editors in turn have little incentive to spend their unpaid time improving a paper, so it’s easier to simply dismiss a paper or ignore the communicative shortcomings (after all, the referee understood the damn thing…).

Papers are written so that their author(s) can forget their content and move on to other things.

The lack of quality control in academic writing endangers long term accessibility of our research as much as data supplements in proprietary formats. In the long run, it makes papers less trustworthy and compromises the greatest strength of research, to build on earlier work. Simply put, archival is rather pointless if nobody can comprehend the content.

Of course, some people are more skilled as writers, some less. So why not pair them up? Saharon Shelah has published over 1,000 papers with 220 co-authors. Not only does the number of co-authors allow Shelah to produce more papers, it also allows the community to understand them better. Yet some of his co-authors are mocked as “Shelah’s secretaries”, supposedly not publishing enough “alone”. Besides being a pathetic ad-hominem attack, this completely overlooks the fact that in many cases only these co-authors make Shelah’s incredible output accessible to the research community.

Let’s have editors and referees tell bad writers to find a capable co-author instead. It should be a win-win situation — good writers will get to be on more papers and researchers lacking in writing skills get their thoughts polished so that other people can actually build on their work. As a bonus, the papers get some pre-publication peer-review and editors and referees shoulder a smaller workload. All we have to do is give up the notion that only “new results” are acceptable research currency. It seems a small price to pay.

And don’t call them secretaries, when they really are smiths. They might not mine the ore, but without them it’s all a pretty useless lump of rock.

### 10. getting from the come-to-me mentality to the go-out-and-find-them mentality

Sometimes in my dreams, somebody screams “I need journal rankings and impact factors because that’s the only way to weed out 600 applicants” and I awake sweating, panicking, thinking: you’re doing it wrong!

It’s quite simple, really. If you have a job that attracts 600 serious applicants, you should be headhunting to find the best fit — not passively waiting for applications to pile up on a search committee’s desk. Of course, the current system does not allow such behavior. But without a doubt this is the better strategy for hiring, one with an actual chance of finding a good fit for your department, for all aspects of research.

This simple idea is far-fetched given the established system. There would be many challenges in changing our culture in this direction, not the least of which is keeping nepotism in check. I don’t think it will really happen, actually, much as I dream it would.

But using impact factors instead is simply lazy. Our reliance on them is actually damaging to our community, as they are metrics that can be easily gamed and are heavily biased towards mainstream research. Needless to say, serving on hiring committees is yet another research activity that deserves much more recognition. Only if this work gets proper acknowledgement is there any chance of spending more resources on a productive strategy for one of the highest-impact activities of any researcher — hiring fellow faculty members.

### 11. a democratization of the communities

Here’s my biggest dream: a democratically organized scientific community. With this I’m not talking about the challenges of representing interests of scientific communities within a democratic society. Also, I’m not talking about the democratic aspects of citizen science. (Both of these are extremely important, of course.)

Instead, I’m baffled by the aristocratic and often oligarchic structures of scientific communities. Can you imagine editorial boards or grant committees being appointed through a democratic process, say, through a parliament of researchers? And can you then imagine this parliament being elected by all researchers of a specific field — the faculty of small colleges, the researchers in industry, the grad students, the postdocs, all getting the same vote as prize-winning researchers?

Noam Nisan once wrote something that (brutally out of context) reminds me of 18th-century aristocracy:

> [...] one shudders at the thought of papers being evaluated by popular vote rather than by trusted experts.

It seems true enough until you ask: who decides which experts are considered trusted?

One of the biggest problems of academia today is that positions of power and responsibility are assigned by what is currently considered academic merit — basically, the “new results” count, impact factors, blah blah. This often leads to problems, since researchers who are exceptional at producing new results are often poor at managing our communities. But there’s something more fundamental at play: meritocracy only seems fine until you notice there’s no such thing. Academia is not about merit, it is about reputation — and reputation is currently determined exclusively by the journals you publish in, and hence by the editorial boards that think your research is “interesting”. However, appointments to these editorial boards are essentially oligarchic. Could it be different? After all, it’s not as if nothing in our society is organized by popular vote; elsewhere, we routinely connect trust to it.

Democracy is the best of the worst — in academia as elsewhere. But in academia, democracy hasn’t really been possible until the advent of the internet. It used to be that we were all so disconnected that each prof had to be the lone ruler of a small academic duchy. But the net brings us so close together that we can constantly work in much larger groups, our collaborations span continents, our communication is instantaneous and worldwide. We are so connected that democratic decisions are not only possible, they are inevitable.

This may sound like a revolution but it isn’t. It really isn’t. If we had an election today for Grand Nagus of Mathematics — how would Tim Gowers not win? That is to say, if we had elections, we’d most likely vote for the same people who are in charge now.

But at least we could hold them accountable. You see, what still comes back to haunt me is this quote from Tim Gowers:

> I have often had to referee papers that seem completely uninteresting. But these papers will often have a string of references and will state that they are answering questions from those references. In short, there are little communities of mathematicians out there who are carrying out research that has no impact at all on what one might call the big story of mathematics. Nevertheless, it is good that these researchers exist. To draw an analogy with sport, if you want your country to produce grand slam winners at tennis, you want to have a big infrastructure that includes people competing at lower levels, people who play for fun, and so on.

As true as the comment appears, it comes across as terribly elitist. The huddled masses are tolerated by the elite only because their uninteresting efforts (is there a greater insult?) are needed for justification. This is, of course, completely upside down. It is the large body of hard working “average” researchers that ensures the future of our community and allows a few talented minds to go to extremes. The average researchers are the ones giving them the enormous privilege of pursuing pure research idly.

Maybe we could use a new constitution for our scientific communities. And then let’s have elections. And then let’s have transparency and accountability.

Oh well, it’s only a dream.

# 11 dreams for the publishing debate — #11 a democratization of the communities

And now the conclusion. As always, check out the first post for some motivation.

These are dreams. Some are realistic—perhaps just around the corner; others are way out there—basically crazy. Some will apply to everyone, others only to some. But all have diversity in mind, diversity in our expectations of who researchers are and what they do.

### 1. write fewer original-research papers

I know what you’re going to say. But hear me out. This is at the core: to enable researchers to publish fewer “new result”-papers.

I believe all major problems brought up in the debate are, at the heart, caused by the immense increase in publications — but not the global increase, the personal one. You have to publish far too much/big these days to get a half-decent position/grant. Increasing publication numbers did not increase the quality of research or, for that matter, the “quality of life” in our communities.

Instead, the massive inflation is killing us, devaluating everything we do as researchers. More papers mean that our papers are worth less. Having to publish more papers means we produce more research of questionable quality (unintentionally and otherwise). Especially young researchers have to publish for metrics instead of quality. Worst of all, evaluating researchers only by this steam-punkesque output means that the jobs go to people with this one singular skill — writing the right kind of papers to please the editorial boards of the right kind of journals — leading to an intellectual monoculture instead of diverse, stable, rich communities. In particular, the pressure works against women and minorities as they often start with and continue to face disadvantages in their careers that make it harder to produce the desired “output” in the desired time frame.

(If you’re wondering why I’m not bashing “evil publishers”. I don’t think they are the problem — we are. If you are happy with the inflation of papers-in-journals, then big publishers are what you need.)

### 2. get real credit for surveys, reviews and exposition

Surveys, reviews and expositions are research — nothing more, nothing less. We live in a time where it is actually more important to write expository work. Why? Because we’ve optimized our production pipeline so well that there is no shortage of new (looking) results. Yes, you can argue about “relevance” (I won’t, but I can’t stop you). But if you’re serious about all that “research for research’s-sake”-talk, then you might notice that we’ve figured out how to educate people in the tens of thousands every year that do nothing else but churn out result after result.

As Felix aptly wrote: we need to move beyond “new” results. And this means to step back from them and, well, review. Surveys are the glue that holds our fields together, holds our communities together. We do it all the time — if you read a paper thoroughly, you’ll most likely write a review anyway. As we aggregate, work on larger projects or grant proposals these aggregate and easily become surveys and expositions. We need to make all of these public so that the community and the original author(!) can benefit from this enormous creative output that otherwise might only show up in a reading seminar or a journal club.

Mathematicians usually do even more. If we read a paper, we create our own version of the results. Mathematics need these just like music needs different interpretations. We need to give people who are “just” re-writing proofs the credit they deserve, the credit for keeping results alive, accessible and understandable to a wider audience than the original author and referee.

And with credit, I don’t mean vouchers for some bookstore. Surveys must be first-class citizens! Surveying and reviewing other peoples work belongs on your CV, it is no less original than your “original research” and every department should have people that are exceptional at it, that are better at surveys, reviews or exposition than “new” research and add to the diversity of a department.

### 3. get real credit for refereeing

Refereeing is research, just like surveys and “new results”. It might seem redundant after dream #2 (and in many ways I would say that’s the goal), but given the importance of refereeing right now and the way we do it, it’s separate.

Currently, good refereeing seems nearly impossible. With the increase of publications we can’t even catch up with what’s happening in our own narrowly focused interests — how, then, can we expect to referee properly? Sam tells his own funny but sad story of refereeing honestly but the problem runs deep. Good referees are hard to find and when you find them, they could be writing a paper instead of refereeing — and why should they not?

There are many ways we could improve refereeing. We can (and should) split up the different stages of refereeing (reference checking, correctness checking on different levels, opinion gathering etc), we should open pre- and post-publication peer review (and in-between peer review), we should use alternative methods to collect these reviews (instead of review journals that have at most one opinion per paper).

One key of refereeing is forming an opinion — and voicing it. That’s very hard, in particular for young and disadvantaged researchers who fear significant repercussions. But we don’t need less people sharing their opinions on other researchers’ output, we need more (whether they agree or not); it’s a responsibility both to our own research communities and to a larger society.

But all of these are very unlikely to work, if we don’t find a way to give referees the credit (and criticism) they deserve. Before we can praise referees we need tools to evaluate refereeing, both in current and future form. Above all, we need to be able to put refereeing on the CV, it’s part of the research qualities every strong department should strive for.

### 4. get real credit for communicating

Communicating other people’s work through surveys, exposition and reviews is important. Then there’s communicating to students aka teaching. This is an especially sore point for mathematics where undergraduate teaching has become a blunt tool to weed out graduates for other disciplines — our self-respect seems greatly lacking. And then, of course, there’s spreading the word to the wider public.

Have you ever noticed that many of the great researchers are excellent communicators (and teachers and surveyors and referees)? I would go as far as to say that a truly great researcher will be great at at least one of these ways of communicating. Without that, you’re only a great something-else. We should cherish not just one ability of our truly great minds, but all of them. Right now we promote researchers according to how their “new result” output compares to the “new result” output of the truly great ones. Why are we so one-dimensional?

Also, communication goes both ways, so we must listen. Can you imagine a graduate-student-run, “Law Review”-like journal for mathematics? Graduate students are perfect for forcing you to reflect on your research — we should encourage them to make their thoughts public (including anonymously and pseudonymously). But we mathematicians need to go further. When Bob Moses spoke at Michigan earlier this year, he argued that history will judge us as math literacy becomes a civil rights issue in the 21st century. Are we listening?

Without communication, we risk the longevity of our own research areas, because we won’t be understood by the next generation, by other areas, or by society as a whole. That means something’s gotta give, and we need to accept it by giving communication real credit.

### 5. sharing all our work every way we can

One battle that most scientists are still fighting — full self-archival rights — mathematics has long won. We need to make this happen for everyone. We must use the arXiv, our homepages or (if you must) walled gardens such as academia.edu and researchgate. But we should also embrace more recent alternatives like figshare and github to post all our research publicly — preprints, notes, lecture notes, expository notes, simply everything. Open notebook science is the key, but it’s in its infancy. We need to find ways (many different ones) that work for a larger part of our communities so it becomes easier for people to experiment with it and to make it something even better. It might not be for everyone, but it’s something everybody will benefit from.

However, the truth is that even among mathematicians a large group doesn’t use the arXiv, let alone keep a professional homepage deserving of the name. I know that older researchers in particular often hesitate because technical issues are “more trouble than they’re worth”. This is a challenge, and we must argue against it and, more importantly, help sort out those problems — within departments, within small research communities, and so on. Approach people, ask them why their papers aren’t available, and help them put them properly online. In all other cases: Don’t be a Grothendieck.

### 6. publishing in smaller increments

Have you ever read a paper that seemed to hide its true goal because the author wasn’t finished but had to publish something under the pressure of otherwise not having a job? Have you ever read a paper that made a small but reasonable result look like much more than it really was, just so it would clear the bar of the least publishable unit? Have you ever read a paper that was so badly written that you couldn’t make sense of it?

Paradoxically, one way to publish fewer “new results” papers might be to publish more, but differently. For scientists this might seem easier: publishing as the data comes in. But even in mathematics we have all those little results — the small gems, the one-line proof, the clever counterexample, the good reformulation, the useful shortcut — all of which could be published quickly and openly instead of waiting until we find enough to “make it a paper”. Just like data, these could be reviewed publicly much more easily, and we should get proper credit for doing so (both author and reviewer).

Longer results could (depending on their nature) be published incrementally, with multiple public revisions. Take the preprint idea one step further and make your writing process public. Use a revision system like github to expose the process. Allow for outside input into your writing process, into your working process. The internet makes us an intricately connected community, and we can work together in one big virtual seminar. There are already excellent examples in this area. In mathematics in particular, we have the Polymath project and Ryan O’Donnell’s Analysis of Boolean Functions, a textbook he’s working out as a blog.

But, as I’ve argued, Polymath doesn’t work for very many people, so, again, we need many, many more projects like these so that more people have the opportunity to find a way that works for them.
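Exposing the writing process through a revision system, as suggested above, takes only a handful of commands. Here’s a minimal sketch with git — the directory name, file names, commit messages and remote URL are made up for illustration:

```shell
# Start a repository for a paper in progress (all names here are hypothetical)
mkdir mypaper && cd mypaper
git init -q
git config user.email "author@example.org"  # needed before committing in a fresh repo
git config user.name "Author"

# First increment: state the lemma and sketch the proof
echo '\section{A small lemma}' > paper.tex
git add paper.tex
git commit -q -m "Draft: statement and proof sketch of the key lemma"

# Later increment: a counterexample forces a revision -- the failure stays on record
echo '\section{A counterexample}' >> paper.tex
git commit -q -am "Revise: counterexample shows the lemma needs an extra hypothesis"

# The whole history of revisions is there for anyone to inspect
git log --oneline

# Making it public is one more step, e.g. on a host like GitHub (hypothetical remote):
# git remote add origin https://github.com/yourname/mypaper.git
# git push -u origin main
```

The point is that every failed attempt stays in the history: even a dead end becomes inspectable work that others can learn from, rather than notes thrown away each day.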

There’s of course a risk — this could create a lot of noise as incrementally published results implode when data turns out to be flawed, proofs disintegrate and general anarchy rears its head! But I think it’s worth the risk. Search technology is constantly improving and good scientific standards should ensure that failed research is marked accordingly. And we have so much to gain! We might be able to finally give credit for failing productively — the main thing researchers do anyway, we fail and fail and fail until we understand what’s going on. Sometimes we have to give up, but why shouldn’t somebody else continue for us?

Even if your research implodes, you should get credit and, much, much more importantly, you will help others not to repeat your mistakes. Fred once worked on a nice old problem which, after a few months, clearly wasn’t going anywhere. But he realized that all his lines of attack had probably been attempted before, so he wrote a preprint on all the ways to fail and published his code along with it, so that people in the future might benefit from his failure. Or, as Doron Zeilberger wrote: if only John von Neumann’s maid had saved all the notes he’d thrown away each day!

Or to put it differently: the most exciting result in 2011 was having no result about Con(PA).

### 7. an affordable open access model

Research publications should be free for both authors and readers. When it comes to traditional journals, there are already some in mathematics (e.g. NYJM) that offer open access without any fees. I believe we can move to a journal system that is entirely open access and free of publishing fees, but we’re not there yet. Mathematical journals are said to have profit margins of 90%, so we should be the first to get there. Gold open access is already realistic and, more importantly, can be made affordable right now. With PeerJ, this is already around the corner on a much larger scale.

On the other hand, it seems natural to me to return academic publishing to universities and academic societies. For journals, this could simply be done PLoS-ONE style (checking correctness, not “importance”), and our institutions could certainly make such journals open access, non-profit and actually free (just have each department produce one journal). But new ways of doing and sharing research will hardly fit into the journal model. These methods will be much more user-centric; they will be about people, not publishers. And the natural place to store information about people is their professional homepage as a repository of their work. It seems natural that universities or academic societies could play a much better role here than proprietary social networks.

However, one problem we’d face is that journal publishing is the cash cow of many societies, and this money is often used to cross-finance important services. That is not helpful in a time when publishing is a button. We’ll have to find another way to finance things that need financing, and we’ll have to talk about that, too. Additionally, if we move scientific “publishing” to the next level, there will be new costs: costs for research into doing it, costs for experiments, costs for failures. We need to talk about those, too.

### 8. a cultural change of doing research (and metrics for it)

If we publish fewer “new results”, some structural problems might just disappear. Fewer papers means fewer journals, means fewer subscriptions, means less refereeing. A smaller workload and lower costs. But this can only work if we have tools to nevertheless show what happens in between writing-down-a-decent-result-which-takes-years-dammit. Thanks to the internet, we can actually hope to do this. But the internet will also change everything (again and again), and it will change our communities (again and again). We need to invest resources right now to be able to benefit from this change, to find a way to evolve into a work mode more appropriate for what is yet to come than for what came before. We need to get into a constant-change mentality, and we need to make it worth everybody’s while.

Our funding bodies, academic societies, universities and other institutions must fund more experiments in doing research differently, and the metrics needed for them. At the very least, we need more incubators like Macmillan’s Digital Science, but also non-profit ones along the lines of MathJax’s business model. Conversely, our communities must value anybody’s effort to join new experimental platforms such as MO/math.SE, blogs, citizen science projects, polymath-esque projects, wikipedia and all those platforms that are still only dreams. The altmetrics people are already setting many good examples, and platforms such as Stackexchange and Wikipedia are working hard to develop reputation technology that will allow us to measure people’s activity in these new scientific environments.

This, of course, means breaking out of the mono-culture of “papers in journals” — a rough cultural change. Nobody talks about this drastic change, which, I believe, makes it the biggest problem in this entire debate. If we change the way we do (and evaluate) research, then we ask an incredible amount of the people who are doing well in the current model, who are good at (only?) writing the right kind of papers for the right kind of journals. It would be a revolution if people were hired because their non-traditional research activities outweighed the traditional paper count of other applicants — in other words, if hiring happened strategically, with that kind of diversity in mind. Case in point: even Michael Nielsen overlooks this problem completely in his wonderful book.

This change is even more important for smaller research areas, which can’t play the impact factor game (and it must be made to work for them). Failing to adapt might even mean extinction in their case. But the potential is equally great, as more diverse ways of doing research can also mean a better chance to work across fields and to improve the collaboration and visibility of small fields.

### 9. propagating the Shelah Model — encouraging bad writers to seek out good writers

This may seem a very math-specific dream, possibly extending to the humanities, but it does apply to the sciences in more than one way.

Not only do we have too many “new result” papers, we also have too many horribly written ones. Although there’s certainly a talent to being a great writer of research publications, we’re facing the problem that our communities just don’t care enough to enforce even mediocre standards of writing. This is particularly hard with leading researchers, whose manuscripts are barely scrutinized.

Much worse, however, young researchers have an incentive not to care about the quality of their writing and the work it requires. Instead, the motto is: “Just get it past the referee and be done with it! You could produce a new result while you waste your time revising this one!” Referees and editors, in turn, have little incentive to spend their unpaid time improving a paper, so it’s easier to simply dismiss it or to ignore its communicative shortcomings (after all, the referee understood the damn thing…).

Papers are written so that their author(s) can forget their content and move on to other things.

The lack of quality control in academic writing endangers the long-term accessibility of our research as much as data supplements in proprietary formats do. In the long run, it makes papers less trustworthy and compromises the greatest strength of research: building on earlier work. Simply put, archival is rather pointless if nobody can comprehend the content.

Of course, some people are more skilled as writers, and some are less so. So why not pair them up? Saharon Shelah has published over 1,000 papers with 220 co-authors. Not only does the number of co-authors allow Shelah to produce more papers, it also allows the community to understand them better. Yet some of his co-authors are mocked as “Shelah’s secretaries”, supposedly not publishing enough “alone”. Besides being pathetic ad-hominem attacks, such remarks completely overlook the fact that in many cases it is precisely these co-authors who make Shelah’s incredible output accessible to the research community.

Let’s have editors and referees tell bad writers to find a capable co-author instead. It should be a win-win situation: good writers get to be on more papers, and researchers lacking writing skills get their thoughts polished so that other people can actually build on their work. As a bonus, the papers get some pre-publication peer review, and editors and referees shoulder a smaller workload. All we have to do is give up the notion that only “new results” are acceptable research currency. That seems a small price to pay.

And don’t call them secretaries when they really are smiths. They might not mine the ore, but without them it’s all a pretty useless lump of rock.

### 10. getting from the come-to-me mentality to the go-out-and-find-them mentality.

Sometimes in my dreams, somebody screams “I need journal rankings and impact factors because that’s the only way to weed out 600 applicants” and I awake sweating, panicking, thinking: you’re doing it wrong!

It’s quite simple, really. If you have a job which attracts 600 serious applicants, you should be headhunting to find the best fit — not passively waiting for applications to pile up on a search committee’s desk. Of course, the current system does not allow such behavior. But this is without a doubt a better strategy for hiring, one with an actual chance of finding a good fit for your department, across all aspects of research.

Given the established system, this simple idea is far-fetched. Changing our culture in this direction would bring many challenges, not the least of which is keeping nepotism in check. I don’t think it will really happen, actually, even if I dream it would.

But using impact factors instead is simply lazy. Relying on them actually damages our community, as they are a metric that can be badly gamed and is heavily biased towards mainstream research. Needless to say, serving on hiring committees is yet another research activity that deserves much more recognition. Only if this work gets proper acknowledgement is there any chance of spending more resources on a productive strategy for one of the highest-impact activities of any researcher — hiring fellow faculty members.

### 11. a democratization of the communities

Here’s my biggest dream: a democratically organized scientific community. With this I’m not talking about the challenges of representing interests of scientific communities within a democratic society. Also, I’m not talking about the democratic aspects of citizen science. (Both of these are extremely important, of course.)

Instead, I’m baffled by the aristocratic and often oligarchic structures of scientific communities. Can you imagine editorial boards or grant committees being appointed through a democratic process, say, through a parliament of researchers? And can you then imagine this parliament being elected by all researchers in a field — the faculty of small colleges, the researchers in industry, the grad students, the postdocs, all getting the same vote as prize-winning researchers?

Noam Nisan once wrote something that (brutally out of context) reminds me of 18th-century aristocracy:

> [...] one shudders at the thought of papers being evaluated by popular vote rather than by trusted experts.

It seems true enough until you ask: who decides which experts are considered trusted?

One of the biggest problems of academia today is that positions of power and responsibility are assigned by what is currently considered academic merit — basically, the “new results” count, impact factors, blah blah. This often leads to problems, since researchers who are exceptional at producing new results are often poor at managing our communities. But there’s something more fundamental at play: meritocracy only seems fine until you notice there’s no such thing. Academia is not about merit, it is about reputation — and reputation is currently exclusively determined by the journals you publish in and hence the editorial boards that think your research is “interesting”. However, appointments to these editorial boards are essentially oligarchic. Could it be different? After all, it’s not like there’s nothing that we organize by popular vote in our society — we organize everything so that trust is connected to popular vote.

Democracy is the best of the worst — in academia as elsewhere. But in academia, democracy hasn’t really been possible until the advent of the internet. It used to be that we were all so disconnected that each prof had to be the lone ruler over a small academic duchy. But the net brings us so close together that we can constantly work in much larger groups, our collaborations span continents, our communication is instantaneous and world wide. We are so connected that democratic decisions are not only possible, they are inevitable.

This may sound like a revolution but it isn’t. It really isn’t. If we had an election today for Grand Nagus of Mathematics — how would Tim Gowers not win? That is to say, if we had elections, we’d most likely vote for the same people who are in charge now.

But at least we could hold them accountable. You see, what still comes back to haunt me is this quote from Tim Gowers:

> I have often had to referee papers that seem completely uninteresting. But these papers will often have a string of references and will state that they are answering questions from those references. In short, there are little communities of mathematicians out there who are carrying out research that has no impact at all on what one might call the big story of mathematics. Nevertheless, it is good that these researchers exist. To draw an analogy with sport, if you want your country to produce grand slam winners at tennis, you want to have a big infrastructure that includes people competing at lower levels, people who play for fun, and so on.

As true as the comment appears, it comes across as terribly elitist. The huddled masses are tolerated by the elite only because their uninteresting efforts (is there a greater insult?) are needed as justification. This is, of course, completely upside down. It is the large body of hard-working “average” researchers that ensures the future of our community and allows a few talented minds to go to extremes. The average researchers are the ones giving them the enormous privilege of pursuing pure research idly.

Maybe we could use a new constitution for our scientific communities. And then let’s have elections. And then let’s have transparency and accountability.

Oh well, it’s only a dream.