I think about submissions a lot. Not just figuring out what I will send where next, but bigger picture stuff like how many submissions is “normal” or “enough.” How many acceptances equals success. Torturous questions like that, which don’t really have a real (i.e. definitive) answer. I adore definitive answers. Objective feedback. Hard and fast rules that tell me when things have worked and when they haven’t. With such a mindset, it’s hard to know why and how I ended up writing for a living – where I’m not sure certainty ever happens – but, you know, such is life.
Further to thinking about submissions a lot, I have this one particular friend (hi, Elizabeth!) I talk about submissions with a lot. She’ll come over for tea and a chat, and that chat will almost always turn to which journals are open, who has a good reputation for replying fast (or at all), and whose guidelines are completely incomprehensible.
Even further to this, it has not been unknown for me to start making notes during these chats or periods of intense thought. I will often look up my submission folder in my email inbox, or pull up one of my many spreadsheets. But in all this, I wondered, has anyone else perhaps looked into the submission process more thoroughly? Has anyone ever sat down and researched the stats behind this seemingly mysterious process of firing your word babies out into the void, hoping one of them will land somewhere and… I’m not honestly sure where I was going with this metaphor; ‘word babies’ is maybe one of the worst phrases I’ve ever written, and I apologise, but I’m sure you catch my meaning.
It’s all well and good to torture yourself, wondering if the five submissions you made yesterday were “enough,” or if that one you spent two weeks on a month ago was “worth it.” I thought to myself, wouldn’t it be much more effective to torture yourself by the cruel and unusual means of comparison?
I started by sending a tweet into the void, asking if someone had indeed compiled such data. When no one replied, I started to compile it myself. I called it “informal research,” because I’m aware it has limitations. Namely, people forget things. We recollect things wrong. Even those of us who keep spreadsheets sometimes forget to update them. And there are twelve million factors that can affect a single number on a spreadsheet, offered without context. (E.g. ‘I sent out two submissions, and had two acceptances, but they were by online zines that pretty much accept everything/everyone’ vs ‘I submitted to twenty of the top literary journals in Iceland/Johannesburg/Venezuela that, by nature of being the top, have to reject 99.9% of the work they get, regardless of quality.’)
Some people who responded to this request for data (because yes, some people did actually reply!) did indeed offer context. And that was useful for my own brain when reading the numbers, but there was no quantitative way to represent it in the combined stats that wouldn’t require me to A. go back and re-contact the other respondents to get their context for comparison, and B. make the entire endeavour way too complicated.
I’ve already made this preamble overly dense, so let’s cut to the actual raw data. Here is the table I created detailing the info I was given:
Where I have highlighted numbers in yellow, I am indicating that I am treating them as the most significant, disregarding the ones that are crossed out. I should have clarified already that this is just for short-form pieces, which mainly fall into the categories of poetry and short stories. Novel submissions are a whole other ball game.
Here is a breakdown of stats for Person One (myself):
Where I’ve given a number and then followed it up with a different number in brackets, the bracketed number is how many separate publishing credits those pieces were spread across. E.g. in 2010 I had two poems published, but they were published together in the same place at the same time. And in 2014, I had seven poems published, but only across three separate posts or publications.
One of the other things I was looking at with the breakdown of my own stats was whether I had more acceptances the longer I’d been writing and submitting. As you can see, the answer is not really, but I think a lot of that is down to the types of places I’ve been sending my work to.
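(For the spreadsheet-inclined: here’s a minimal sketch of the kind of per-year tally I was doing by hand. It assumes a hypothetical file called submissions.csv with columns year, submissions and acceptances – the file name and columns are just for illustration, not how my actual spreadsheet is laid out.)

```python
import csv
from collections import defaultdict

# Tally submissions and acceptances per year from a hypothetical CSV
# with columns: year, submissions, acceptances.
totals = defaultdict(lambda: {"submissions": 0, "acceptances": 0})

with open("submissions.csv", newline="") as f:
    for row in csv.DictReader(f):
        totals[row["year"]]["submissions"] += int(row["submissions"])
        totals[row["year"]]["acceptances"] += int(row["acceptances"])

for year in sorted(totals):
    subs = totals[year]["submissions"]
    accs = totals[year]["acceptances"]
    rate = (accs / subs * 100) if subs else 0.0
    print(f"{year}: {accs}/{subs} accepted ({rate:.0f}%)")
```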
Now on to the million-dollar question: what does all this tell us?
Well, on the most basic level, the more you submit, the more you can and will get accepted. It’s not an exact science, but we knew that. When I’ve heard other people talk on the subject of submissions and rejections, they will often conclude with “it’s all a numbers game.” And I think they’re right. I think my very small sample size supports that theory.
Maybe the takeaway message from all this is the point I started with: there is no right way to submit work. No magical number of emails you must send to guarantee acceptance and/or success, if those even are the same thing (a different topic for a different day, perhaps). Looking for a right way is all well and good – you may find a way that is right for you, right now, which is wonderful – but it’s actually okay to change things up. To experiment. If there’s no one, definitive right way, it also means there’s no wrong way.
It’s very easy for me, sitting here having nitpicked numbers for the past hour, to now say ‘go forth and write, submit, and be merry. Don’t get too caught up in analysing the numbers!’ but, you know what? That’s exactly what I’m gonna say. But I’m saying it to myself as well as anyone else reading this. I’ve conducted this informal research A. so you don’t have to, but also B. so I can stop wondering about possibilities and satisfy myself with at least a few absolutes.
If you have any thoughts on this, I’d be very interested to hear them in the comment section below.