Frequently asked questions

  1. Is there a problem format?

    There are currently three problem domains, as described on the website, and each has its own problem format. The problem formats for the first and second challenge domains have been released, and examples of how controller code provided by the teams will be run can be found in the User's Guide.

  2. What should submissions look like?

    This is a controller challenge: given an instance of a problem in a specified format, each team submits a controller that provides a solution through an appropriate interface. Concretely, the interface is defined using ROS topics and services. Because the only method of evaluation currently available is Monte Carlo (randomized, finite-duration) trials, controllers themselves are treated as largely opaque entities. An example is provided in examples/sci_concrete_examples/scripts/lqr.py, and we are happy to help troubleshoot any compatibility issues with submitted controller software. Though not planned for the first challenge, we are considering other messaging-based interfaces, e.g., using LCM, as well as solutions of particular forms, e.g., involving Mealy machines and discrete abstractions.
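
    As a rough illustration only, a controller node subscribes to the published state and publishes control inputs. The topic names and message type below are assumptions made for this sketch, not the official interface, which is documented in the User's Guide and exemplified by lqr.py:

      #!/usr/bin/env python
      # Sketch of a controller node.  The topic names ("state", "input") and
      # the use of Float64MultiArray are illustrative assumptions, not the
      # interface actually used by the challenge; consult the User's Guide.
      import rospy
      from std_msgs.msg import Float64MultiArray

      class StateFeedback(object):
          def __init__(self):
              # Publish control inputs and subscribe to state estimates.
              self.input_pub = rospy.Publisher('input', Float64MultiArray,
                                               queue_size=1)
              self.state_sub = rospy.Subscriber('state', Float64MultiArray,
                                                self.callback)

          def callback(self, msg):
              # Simple proportional feedback u = -x on each coordinate.
              u = Float64MultiArray()
              u.data = [-x for x in msg.data]
              self.input_pub.publish(u)

      if __name__ == '__main__':
          rospy.init_node('example_controller')
          StateFeedback()
          rospy.spin()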

  3. How will submissions be evaluated?

    Currently the only available evaluation is based entirely on Monte Carlo trials, in which scoring is determined by satisfaction of the task specification and by performance metrics. In some cases the duration of each trial is provided as part of the problem instance; in others it is randomly generated. This statistical approach to evaluation has well-known limitations for deciding correctness of solutions. However, it permits a great variety of possible controllers because essentially no internal structure of solutions is assumed. We are considering other methods of evaluation that require solutions of particular forms and allow correspondingly stronger assertions.
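
    The sketch below illustrates only the statistical nature of this kind of evaluation; the trial mechanics and the particular scoring function shown are assumptions made for the sketch, not those used in the challenge:

      import random

      def run_trial(controller, instance, duration):
          # Stand-in for executing one trial of the controller on the given
          # problem instance; in the challenge this would involve running a
          # simulation for the specified duration.  Dummy values are returned
          # here so that the sketch is self-contained.
          satisfied = random.random() < 0.9
          performance = random.uniform(0.0, 1.0)
          return satisfied, performance

      def monte_carlo_score(controller, instances, num_trials=20):
          # The trial duration is taken from the problem instance when it is
          # provided there, and randomly generated otherwise (an assumption
          # of this sketch).
          results = []
          for _ in range(num_trials):
              inst = random.choice(instances)
              duration = inst.get('duration') or random.uniform(10.0, 60.0)
              results.append(run_trial(controller, inst, duration))
          frac_satisfied = sum(1 for sat, _ in results if sat) / float(len(results))
          mean_performance = sum(perf for _, perf in results) / float(len(results))
          return frac_satisfied, mean_performance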

  4. Where do we submit our controllers?

    Submissions will be in the form of a tarball containing a controller ROS package. We will issue more precise instructions on the submission process shortly -- stay tuned.
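
    In the meantime, a gzipped tarball of a package directory can be created with standard tools; for example, using Python's tarfile module (the package name below is a placeholder, and the required naming and layout will be part of the forthcoming instructions):

      import tarfile

      # "my_controller" is a placeholder for the directory containing your
      # controller ROS package; the required naming convention and layout
      # will be announced with the official submission instructions.
      with tarfile.open('my_controller.tar.gz', 'w:gz') as tar:
          tar.add('my_controller')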

  5. When does this contest open up / what are the deadlines?

    • 1 May: dry-run trials for integrator_chains and the simulation variant of dubins_traffic
    • 11 May: first round of official competition trials for integrator_chains and the simulation variant of dubins_traffic
    • 16-19 May: second round of official competition trials for integrator_chains and the simulation variant of dubins_traffic, and the official competition for the physical variant of dubins_traffic (at ICRA in Stockholm, Sweden)

  6. Why multiple rounds?

    For the simulation variants, each team's best performance over the two rounds will be used to determine rankings.
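
    Put another way, a team's ranking score is the better of its two round scores. A trivial sketch, assuming for illustration that higher scores are better:

      def final_score(round_scores):
          # A team's ranking score is its best performance over the rounds
          # (assuming, for this sketch, that higher scores are better).
          return max(round_scores)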

  7. Can teams participate in this contest remotely, i.e., while not being at ICRA?

    Yes. Teams that are not participating in the physical variant of dubins_traffic need not be present at ICRA!

Do you have more questions? Contact the organizers, ask on the mailing list, or recommend new FAQs.