Those URLs are invalid: a % in a URL introduces a percent-encoded character and must be followed by exactly two hexadecimal digits (0-9, A-F). So / is encoded as %2F, and a literal % as %25. %% can never be valid percent-encoding, so the server is right to answer with a Bad Request.
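To see the encoding rules in action, here’s a minimal Python sketch (purely illustrative, not part of your setup):

```python
from urllib.parse import quote

# Percent-encoding turns each reserved byte into % plus two hex digits.
print(quote("/", safe=""))  # -> %2F
print(quote("%"))           # -> %25
# A bare "%%" can never come out of correct encoding: the first %
# would have to be followed by two hex digits, and "%" isn't one.
```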
I’d guess that you either have some bad internal links somewhere that are producing these URLs, or Googlebot is once again too smart for its own good and is guessing additional URLs by appending strings it found in JavaScript to the current URL (look for {%defails%} in your source).
As for fixing it: you probably don’t want Google to index those URLs, since they shouldn’t serve valid content (they aren’t valid URLs to begin with). The simplest fix is a rule in robots.txt that disallows them in general, so Google stops crawling them. They may still show up in the “URLs not indexed” report, though, which is why I tend to look for whatever causes Google to “find” these URLs in the first place.
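If the stray URLs really do all contain the literal %% sequence, a sketch of such a rule might look like this (adjust the pattern to match your actual URLs; Google supports the * wildcard in robots.txt):

```
User-agent: *
Disallow: /*%%
```

You can check a rule like this against the reported URLs with the robots.txt tester in Search Console before deploying it.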