REINFORCEMENT
LEARNING
CONFERENCE

August 9-12, 2024
Amherst, MA

Blog Post

The why behind RLC

We’re so excited to launch the first edition of RLC, a new archival conference focused exclusively on reinforcement learning. The announcement has drawn a lot of attention, and we figured this would be a good moment to explain why we think now is the right time to launch a new RL conference, and to talk a bit about what makes RLC different from other ML conferences. Beyond the obvious fact that we’re RL-focused, there’s a whole host of things we’re doing to make this a uniquely worthwhile conference to submit to and attend.

Why a new archival RL conference?

This is the question we get the most. A few things make the RL community, and RL papers, somewhat distinct, and we think they benefit from an archival conference:

  • RL is not a monolithic topic; it’s a collection of subgroups roughly pursuing a similar goal. An archival RL conference lets us organize sessions more appropriately around those subgroups.
  • RL practitioners and researchers benefit from a dedicated venue for finding top-quality work, vetted by top-quality reviewing that follows high standards defined by the RL community.
  • In the longer term, it will help the RL community develop the standards for reproducibility, best practices, and so on that are needed to address the challenges most important to RL.

Finally, a yearly archival RL conference provides an opportunity to build a coherent RL community. The RL community is gigantic, and even though we work on many different problems, there are a lot of unifying themes in the work and concepts we can learn from each other. Researchers lucky enough to attend prior RLDM events have been overwhelmingly enthusiastic about them, which suggests there’s a real need for a yearly event that brings the community together outside of the general ML conferences. This community building also matters because RL research is growing in size and complexity, resulting in larger papers and more tools needed to support research efforts.

Why should you submit?

The conference is new, yet we are already fielding significant interest about submission details and where to submit work. The pragmatic case for submitting is this: (1) you submit to conferences to make folks aware of your work and get feedback on it, and (2) you attend conferences to have interesting conversations and learn about the state of the art. We believe there’s no better way to draw attention to an RL paper than submitting it to RLC. In this first iteration, having your paper accepted as a spotlight or poster is a way to get it in front of a really large fraction of the RL community. These important connections are much more difficult to make at larger, more general conferences: at a 15,000-person conference like NeurIPS, only a few papers get that kind of attention. The larger a conference is, the more power-law effects come into play, and the more focus gets placed on a small number of papers.

There are a few more reasons we’d like to highlight. Our review process (discussed below) sets an extremely high technical bar for accepting a paper. As such, we think having a paper accepted to RLC will constitute a mark of paper quality.

What we’re doing differently

Because we’re a new conference, we can take some exciting risks and try to fix long-standing issues we’ve seen in the ML community. The key change is a technically rigorous yet streamlined review process that should set a high technical bar and provide more useful reviews!

Review process

Everyone understands that the review process in ML is becoming strained, with increasingly overburdened reviewers starting to submit slipshod reviews. Rather than trying to fix the process with yet another change layered on top, the RLC program chairs (Martha White, Adam White, and Feryal Behbahani) have come up with a new procedure that pares things down:

  • Rather than asking reviewers to judge the novelty or impact of an approach, which we think is something the research community will do on its own through citations, we’re just asking reviewers to judge the technical correctness of the work. This simplifies the job of reviewing and hopefully eases the review burden. A lower review burden means we can ask technical reviewers to provide even more thorough reviews.
  • Rather than ensuring that you receive many reviews, we’re focusing on a smaller number of high-quality ones. Reviews will come from more senior PhD students focused on technical correctness, each paired with a senior reviewer who will double-check the review and provide a higher-level perspective on the paper’s relationship to the field. We are also empowering the senior reviewer to throw out low-quality technical reviews that simply add noise to the process.
  • In keeping with the above, we are setting a high bar for a paper being technically correct. More information on this will come out soon.

We hope that providing good feedback makes the RLC submission process more informative and useful. Whether or not your paper gets in, you’ll receive detailed feedback from experts that helps you improve its quality (just in time for NeurIPS if it doesn’t get in!). As a reviewer, you get to learn how to review from the senior reviewer. A follow-up blog post will discuss the review process in much more detail and explain how we arrived at this setup.

Some questions that come up frequently

What about RLDM?

We love RLDM and think it’s great. At the same time, we think it’s important to have something archival for the reasons outlined above.

What about EWRL?

We think EWRL is great too and are actively exploring ways to bring RLC and EWRL together. At the same time, we think it’s important to have a venue that is both archival and free to move from continent to continent.

Are you worried this will split up RL from the ML community?

There are over 10,000 RL papers written every year. More than enough of these will go to other ML conferences that RL will remain a major part of those venues, while RLC provides a focused spotlight on good RL papers.