Hey everyone. I’m a graduate student currently studying machine learning. I’ve had a decent amount of exposure to the field; I’ve already seen many students publish and many students graduate. This is just to say that I have some experience, so I hope I won’t be discounted when I say, with my whole chest: I hate machine learning conferences.

Everybody puts the conferences on a pedestal

The most popular machine learning conferences are a massive lottery, and everyone knows this and complains about this, right? But for most students, your standing in this field is built on this random system. Professors acknowledge the randomness, but many still hold up the students who get publications. Internships and jobs depend on your publication count. Who remembers that job posting from NVIDIA that asked for a minimum of 8 publications at top conferences?

Yet the reviewing system is completely broken

Reviewers have no incentive to give coherent reviews. If they post an incoherent review, they still have no incentive to respond to a rebuttal of it. Reviewers have no incentive to update their scores. Reviewers often have an incentive to give negative reviews, since many of them are submitting papers in the same area they are reviewing. Reviewers have an incentive to collude, because collusion can actually help their own papers.

The same goes for area chairs (ACs): they have no incentive to do anything beyond simply thresholding scores.

I have had decent reviewers, both positive and negative, but (in my experience) they are the minority. Over and over again I see a paper that is more or less as good as many papers before it, and whether it squeaks in, gets an oral, or gets rejected seems to depend entirely on luck. I have seen bad papers get in with faked data or other real faults because the reviewers were positive and inattentive. I have seen good papers get rejected for poor or even straight-up incorrect reasons that bad, negative reviewers put forth and ACs followed blindly.

Can we keep talking about it?

We have all seen these complaints many times. I’m sure that to the vast majority of users in this sub, nothing I said here is new. But I keep seeing the same things happen year after year, and complaints are always scattered across online spaces and soon forgotten. Can we keep complaining and talking about potential solutions? For example:

  • Should reviewers have public statistics tied to their (anonymous) reviewer identity?
  • Should reviewers’ identities be made public after reviewing?
  • Should institutions give more weight to reviewer awards? After all, being able to review a project well is a genuinely useful skill.
  • Should institutions focus less on a small handful of top conferences?

A quick qualification

This is not to discount people who have done well in this system. Certainly it is possible that good work met good reviewers and was rewarded accordingly. That is a great thing when it happens. My complaint is that whether it happens or not seems completely random. I’m getting repetitive, but we’ve all seen good work meet bad reviewers and bad work meet good reviewers…

All my gratitude goes to people who have been successful with machine learning conferences but are still willing to entertain the notion that the system is broken. Unfortunately, some people take complaints like this as attacks on their own success. This NeurIPS cycle, I remember reading an area chair complain unceasingly about authors’ complaints against reviewers: reviews are almost always fair, rebuttals are practically useless, authors are always whining… They are reasonably active on academic Twitter, so there wasn’t much pushback. I searched their Twitter history and found plenty of author-side complaints about reviewers being dishonest or lazy… go figure.

  • mofoss · 1 year ago

    High Paying, Highly Innovative, Highly Hyped = a recipe for an oversaturation of students studying it, an oversaturation of mid-career folks switching into it, and an oversaturated number of non-tech folks completing every LLM and DL certificate to post on their LinkedIn.

    What happens with this oversaturation? You raise the bar to entry - just like leetcode culture did.

    Yay toxic elitism 🤸‍♂️

  • lexected · 1 year ago

    The system is quite broken; one could say that, in its present state, it almost discourages genuine novelty of thought.

    But it’s imperfect, first and foremost, because the people involved are imperfect. Reviewing is often a job assigned to the lowest performers in research groups, or traded away by the highest performers (constantly on big-tech internships, building startups/open-source models on the side) to colleagues who have a somewhat more laid-back attitude to research excellence. You can submit a bad review and it will not come back to bite you, but in the age of reproducibility, a messed-up experiment or a poorly written/plainly incorrect paper that slips through the review system could be your end.

    The idea is that you enter the publishing game at the beginning of your PhD and emerge seeing through, and standing above, the game once you’ve graduated. After all, you first have to master the rules of the game to be able to propose meaningful changes. It is just that, once you have done so, you may have far more incentive to switch to industry/consultancy and never care about the paper-citation game again.

    • MLConfThrowaway (OP) · 1 year ago

      Definitely agree with everything you say. It’s unfortunate… I know the reviewers and people who want to move on from academia are not at fault, although they are often made to carry the extra burden of making the system more fair.

      • Should institutions give more weight to reviewer awards? After all, being able to review a project well is a genuinely useful skill.

      What do you think of this suggestion, by the way? I think if industry (and everyone, really) recognized reviewing as a valuable skill, it might slowly create more incentive to write good reviews.

    • we_are_mammals · 1 year ago

      a messed-up experiment or a poorly written/plainly incorrect paper that slips through the review system could be your end

      Is that true? If your paper is totally wrong, publish a retraction, do not include the paper in your “list of publications”, and move on.