So you’ve just finished your first manuscript and, like any eager graduate student, you decide to submit to [rockstar journal]. It may work out…but for most there’s always a contingency plan. Increasingly, that plan relies on “Open Access” journals such as PLoS ONE or Nature Scientific Reports. But with Open Access publishing comes hordes of competition, so a fledgling scientist is always looking to stand out.
Some post on ResearchGate or submit their work to specialized corners of Reddit, while others dive headlong into the Twitterverse. Some never promote their work directly, but attract attention with blogs and YouTube channels. None of these approaches is a guaranteed strategy, so while we wait for citations to accrue (akin to watching grass grow), how can we tell whether we are on the right track? Altmetric, while still in its infancy, is a reasonable gauge of how much online attention your work has received.
To provide an example (and for shameless self-promotion), this is the Altmetric Round-up for my most recent publication at Nature Scientific Reports. For (stark) contrast, here is my advisor’s most recent Altmetric Round-up. Note that it tracks even the links you post, so uncouthly spamming the interwebz will not go unnoticed.
For graduate students more interested in public outreach and the popularization of science, Altmetric also provides demographics on who is tweeting about your work. I’m not sure how Altmetric/Twitter determines whether a user is a scientist, but they claim about a third of those discussing my work are not residents of the “Ivory Tower”.
Finally, I’d like to suggest a more idealistic means of carving out an academic niche: Expert-Crowdsourced Outreach. It may seem outlandish, but these are becoming increasingly popular accoutrements for job applications.
- Hone your accuracy and precision by crafting concise answers on the /r/AskScience forum (protip: gamify it by joining their panel of scientists and earning some sweet flair).
- Practice explaining your science without confusing jargon by heading over to /r/ELI5, a subreddit where the goal is to explain complicated topics to an imaginary 5-year-old.
- Build programming skills and “street cred” by submitting and/or answering questions on Stack Overflow, a meritocratic community-support base for all things programming (yes, this includes R).
- For computational researchers, pushing your work to a GitHub repository invites potential collaborations, serves as a digital portfolio and resumé, and features productivity metrics that track how often, how much, and at what time of day you work (for better or worse).
Perhaps you can answer these questions for me:
1) How are news stories picked up? I know there were several news stories on a paper I altmetric’d, but it only picked up one. It seems to return only those that link directly back to the paper’s publication page. Attempting this with a press-release page doesn’t work, seemingly because it lacks a DOI that Altmetric can match.
2) Do you think Altmetric has a place in replacing, or working in conjunction with, the JIF?
Thanks for sharing. I did see some tweets I didn’t know existed on my own paper, so that’s interesting.
//Now following the blog.
With regard to the second question, have a look at the San Francisco Declaration on Research Assessment (DORA).  DORA addresses research quality metrics and calls for revision of the use of the Journal Impact Factor. For a document written by very established researchers, the DORA (and accompanying press releases) mention “early-stage investigators” a lot.
Initiatives like DORA are important because they help to create the space for the old measures to be refreshed or put aside so that new quality measures can emerge. Alternative metrics (like Altmetrics) are needed, and organisations must be willing to embrace them.
This is a (very) abridged summary of my latest Research Whisperer post, Exploring an open future.
1.) Your intuition matches my own but I have some more anecdotal evidence to support it. When I link to my paper via Bitly, Altmetric doesn’t pick it up. That being said, Twitter’s automated link shortening doesn’t break their webcrawler.
Regarding PopSci articles…take a look at my PI’s Altmetric page under the “News” and “Blogs” tabs. Those were picked up automatically, but it’s not immediately apparent why. Some clearly link to the original article, while others only mention titles and authors.
For further digging, their API documentation is available. I’d love to subscribe, but the fees are outlandish.
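While the paid tiers are pricey, Altmetric does expose a free v1 “details page” endpoint that returns a JSON attention record per DOI. A minimal sketch in Python (standard library only; the field names in the comments and the example DOI are illustrative, so check their API documentation before relying on them):

```python
import json
import urllib.error
import urllib.request

# Altmetric's free v1 details endpoint: no API key required,
# but it is rate-limited and returns HTTP 404 for untracked DOIs.
ALTMETRIC_V1 = "https://api.altmetric.com/v1/doi/{doi}"


def altmetric_url(doi):
    """Build the v1 details URL for a given DOI."""
    return ALTMETRIC_V1.format(doi=doi)


def fetch_attention(doi):
    """Return the JSON attention record for a DOI, or None if untracked."""
    try:
        with urllib.request.urlopen(altmetric_url(doi)) as resp:
            return json.load(resp)
    except urllib.error.HTTPError as err:
        if err.code == 404:  # Altmetric has never seen this DOI
            return None
        raise


# Example usage (requires network; DOI is a placeholder):
# record = fetch_attention("10.1038/srep01234")
# if record:
#     print(record.get("score"), record.get("cited_by_tweeters_count"))
```

This is roughly what their commercial dashboard wraps; for polling your own papers occasionally, the free endpoint is enough.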
2.) That’s a topic for a whole other blog post, but I’ll give you the basics of where I stand. Most academics agree that the JIF is broken. Most haven’t the slightest idea how it can or will be fixed. I think Altmetric is in a good position to guide the transition to more refined metrics. Inevitably, I foresee a system with a variety of different metrics.
I’d love to see the following scores applied to publications:
(a) Open-ness (metric based on journal, data availability, code availability)
(Which should obviously be called the Schwartz Index)
(b) Scientific Impact (old school metric based on citations, journal standing, etc)
(c) Social Impact (metric based on depth and breadth of attention from online sources)
I suggest these metrics primarily because their collection can be automated.
I’m curious to see this developed further, but I am also interested in whether the source of information will be weighted. Does a blog by a ‘certified’ or ‘credentialed’ blogger carry more weight than one that simply mentions the paper without any analysis? I could imagine issues with blogspam, where blogbots create links to journal pages in order to inflate the numbers. Same with Twitter.
Looking forward to more posts.
Agreed, but hopefully we’ve got smart people who are well aware of those pitfalls.
For now, they seem to be using a de facto “certification” system in that they’re only crawling specific pages and programs. A great example of this is that they report the number of saves in Mendeley but not Papers. I should note that I didn’t expect Vice’s MOTHERBOARD to be on the list of “trusted” sources.
My hunch is that they weigh social media references to an article differently if the poster’s name matches any of the publication’s authors.
We appreciate the follow, and hope to keep the posts flowing!
At the most basic level, please don’t forget to scrub out the metadata before you submit your papers. Blind review isn’t blind if reviewers can peek behind the curtain.
I’ve had two requests for review recently where authors had left their details on the papers (not even in the metadata). Ironically, I do research into privacy.
I consider myself a tech guru, but that just blew my mind.
This revelation leads me to wonder how you feel about preprints moving to arXiv before peer review.
I can see myself compromising the author’s anonymity inadvertently just to provide a thoughtful peer review.
As with most privacy issues, it isn’t an issue until something goes wrong. So for example, most people don’t look at the metadata on electronic documents, so generally it is OK. However, if someone actively wants to know who wrote it, metadata is one way they can find out.
Luckily, it isn’t hard to scrub out metadata. A search for “delete metadata from [insert file format] on [insert computer platform]” will generally turn up useful information.
I don’t know enough about preprints moving to arXiv to comment, but I don’t see why scrubbing isn’t built into the uploading system. If it were an open platform, people would happily write and update scrubbers for most file formats, I think.