
Error when creating a function with a trait-bounded parameter wrapped in an Option enum


I want to create a method for my MyError enum (whose variants are the different error types in my program) that returns a String value that describes the given MyError variant. For example:

pub enum MyError {
    Error1,
    Error2,
}

impl MyError {
    pub fn to_str(&self) -> String {
        match self {
            // The variant paths must be qualified; a bare `Error1` would be
            // a catch-all binding pattern, not a variant match.
            MyError::Error1 => format!("Error1: bla bla bla"),
            MyError::Error2 => format!("Error2: na na na"),
        }
    }
}

This is all well and good, but the problem is that I have a new error variant (say Error3) that must pass a parameter to its format!() macro in the method, like this:

Error3 => format!("la la la {:?}", arg),

This parameter can be of any type as long as it implements the Debug trait. So my solution was

pub enum MyError {
    Error1,
    Error2,
    Error3
}

use std::fmt;

impl MyError {
    pub fn to_str(&self, arg: Option<&impl fmt::Debug>) -> String {
        match self {
            MyError::Error1 => format!("bla bla bla"),
            MyError::Error2 => format!("na na na"),
            MyError::Error3 => format!("la la la {:?}", arg),
        }
    }
}

Here I wrap the trait-bounded parameter in Option, since some variants of MyError do not need it (e.g. Error1). This works for the Error3 variant; I can do the following without any compilation error:

eprintln!("{}", MyError::Error3.to_str(Some(&vec![1, 2, 3])));

It prints the associated error message of Error3. But when I try to use the method on the other variants, which require no additional parameter, e.g. calling

eprintln!("{}", MyError::Error1.to_str(None));

I get the following compilation error:

type annotations needed

cannot infer type for type parameter `impl fmt::Debug` declared on the associated function `to_str` (rustc E0282)

Why can't the compiler infer the type of None here?


Answer

The reason it can't infer this is likely down to some annoying edge cases, the main one being the size of the value being passed: the size of an enum type depends on the size of its largest variant. You can see this with assert_ne!(std::mem::size_of::<Option<i32>>(), std::mem::size_of::<Option<i8>>());. Without knowing what concrete type the Option contains, the compiler can't know how much space the function should reserve for the argument.
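The size point above can be checked directly (a small demonstration, not part of the original answer):

```rust
fn main() {
    // An Option's size depends on the concrete payload type, so the
    // compiler must know that type before it can compile the call.
    assert_ne!(
        std::mem::size_of::<Option<i32>>(),
        std::mem::size_of::<Option<i8>>()
    );
    println!("Option<i32>: {} bytes", std::mem::size_of::<Option<i32>>());
    println!("Option<i8>:  {} bytes", std::mem::size_of::<Option<i8>>());
}
```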

The easiest fix is probably to let the caller specify a concrete type as a type parameter. Even if you pass in None, you can still name a concrete type explicitly (e.g. error.to_str::<()>(None)).

impl MyError {
    pub fn to_str<D: fmt::Debug>(&self, arg: Option<&D>) -> String {
        // etc.
    }
}
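In full, the generic version can be called either way; a runnable sketch (the message strings follow the examples above):

```rust
use std::fmt;

pub enum MyError {
    Error1,
    Error2,
    Error3,
}

impl MyError {
    pub fn to_str<D: fmt::Debug>(&self, arg: Option<&D>) -> String {
        match self {
            MyError::Error1 => format!("bla bla bla"),
            MyError::Error2 => format!("na na na"),
            MyError::Error3 => format!("la la la {:?}", arg),
        }
    }
}

fn main() {
    // With a payload, D is inferred from the argument.
    assert_eq!(
        MyError::Error3.to_str(Some(&vec![1, 2, 3])),
        "la la la Some([1, 2, 3])"
    );
    // Without one, name any Debug type via the turbofish; () works fine.
    assert_eq!(MyError::Error1.to_str::<()>(None), "bla bla bla");
}
```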

Otherwise, I would suggest splitting the method in two to avoid the issue entirely.

use std::fmt;

impl MyError {
    pub fn to_str(&self) -> String {
        match self {
            MyError::Error1 => format!("bla bla bla"),
            MyError::Error2 => format!("na na na"),
            MyError::Error3 => format!("la la la"),
        }
    }

    pub fn to_str_with_context(&self, arg: &impl fmt::Debug) -> String {
        match self {
            MyError::Error1 => format!("bla bla bla"),
            MyError::Error2 => format!("na na na"),
            MyError::Error3 => format!("la la la {:?}", arg),
        }
    }
}
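Callers then pick the method that matches what they have on hand; a runnable sketch of the split (the array context is illustrative):

```rust
use std::fmt;

pub enum MyError {
    Error1,
    Error2,
    Error3,
}

impl MyError {
    // Plain messages for variants that carry no extra data.
    pub fn to_str(&self) -> String {
        match self {
            MyError::Error1 => format!("bla bla bla"),
            MyError::Error2 => format!("na na na"),
            MyError::Error3 => format!("la la la"),
        }
    }

    // Variants with a Debug payload go through this method instead,
    // so no Option (and no type annotation) is ever needed.
    pub fn to_str_with_context(&self, arg: &impl fmt::Debug) -> String {
        match self {
            MyError::Error1 => format!("bla bla bla"),
            MyError::Error2 => format!("na na na"),
            MyError::Error3 => format!("la la la {:?}", arg),
        }
    }
}

fn main() {
    assert_eq!(MyError::Error1.to_str(), "bla bla bla");
    assert_eq!(
        MyError::Error3.to_str_with_context(&[1, 2, 3]),
        "la la la [1, 2, 3]"
    );
}
```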

Beyond that, I can only offer general coding tips. My first instinct is that you may find it more convenient to use a display wrapper that implements fmt::Display.

use std::fmt;

pub struct MyErrorWithContext<'a, D> {
    err: &'a MyError,
    context: Option<&'a D>,
}

impl<'a, D: fmt::Debug> fmt::Display for MyErrorWithContext<'a, D> {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        match self.err {
            MyError::Error1 => write!(f, "bla bla bla"),
            MyError::Error2 => write!(f, "na na na"),
            MyError::Error3 => write!(f, "la la la {:?}", self.context),
        }
    }
}

impl MyError {
    pub fn display<'a, D: fmt::Debug>(&'a self, context: Option<&'a D>) -> MyErrorWithContext<'a, D> {
        MyErrorWithContext {
            err: self,
            context,
        }
    }
}


println!("Error: {}", error.display(Some(&foo)));
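Putting the wrapper together in a self-contained, runnable form (the Vec context and expected strings are illustrative; note the turbofish trick still applies when there is no context):

```rust
use std::fmt;

pub enum MyError {
    Error1,
    Error2,
    Error3,
}

// Borrows the error plus an optional Debug context, and formats
// both lazily through Display.
pub struct MyErrorWithContext<'a, D> {
    err: &'a MyError,
    context: Option<&'a D>,
}

impl<'a, D: fmt::Debug> fmt::Display for MyErrorWithContext<'a, D> {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        match self.err {
            MyError::Error1 => write!(f, "bla bla bla"),
            MyError::Error2 => write!(f, "na na na"),
            MyError::Error3 => write!(f, "la la la {:?}", self.context),
        }
    }
}

impl MyError {
    pub fn display<'a, D: fmt::Debug>(&'a self, context: Option<&'a D>) -> MyErrorWithContext<'a, D> {
        MyErrorWithContext { err: self, context }
    }
}

fn main() {
    let foo = vec![1, 2, 3];
    // Display gives us to_string() for free via the ToString blanket impl.
    assert_eq!(
        MyError::Error3.display(Some(&foo)).to_string(),
        "la la la Some([1, 2, 3])"
    );
    // No context: pick a throwaway Debug type with the turbofish.
    assert_eq!(MyError::Error1.display::<()>(None).to_string(), "bla bla bla");
}
```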
source: stackoverflow.com