That depends on what you consider an "optimal solution" and what the circumstances are. With n <= 10, even an n^3 preprocessing pass is effectively O(1). I'd simply pre-process the graph: for every pair of nodes, collect the edge costs along the path, filter for odd values, sort what remains, and build a table of partial sums indexed by limit. The query then becomes a simple indexed lookup: table[u, v, limit]. For the graph above, the table would look something like the following, where * denotes an arbitrarily large limit.
1 2 [*: 1]
1 3 [*: 0]
1 4 [6: 0, *: 7]
1 5 [*: 1]
2 3 [*: 1]
2 4 [6: 1, *: 7]
2 5 [*: 2]
3 4 [6: 0, *: 7]
3 5 [*: 1]
4 5 [6: 1, *: 7]
You can build this table with any all-pairs (complete) graph-distance algorithm.
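As a rough illustration, here is a minimal sketch of that precomputation in Python. It assumes the graph is a tree given as an edge list, that nodes are numbered from 1, and that a query (u, v, limit) asks for the sum of the odd edge costs on the u-v path that do not exceed the limit; the names are mine, and the aggregation should be swapped for whatever the actual problem asks for.

```python
from bisect import bisect_right
from collections import defaultdict

def build_table(n, edges):
    # edges: list of (a, b, cost) tuples describing the tree.
    adj = defaultdict(list)
    for a, b, w in edges:
        adj[a].append((b, w))
        adj[b].append((a, w))

    table = {}  # (u, v) -> (sorted odd costs on the path, prefix sums)

    def dfs(src, node, par, odd_costs):
        if node != src:
            costs = sorted(odd_costs)
            prefix = [0]
            for c in costs:
                prefix.append(prefix[-1] + c)
            table[(src, node)] = (costs, prefix)
        for nxt, w in adj[node]:
            if nxt != par:
                dfs(src, nxt, node, odd_costs + ([w] if w % 2 else []))

    for src in range(1, n + 1):
        dfs(src, src, 0, [])
    return table

def lookup(table, u, v, limit):
    # Sum of the odd costs on the u-v path that are <= limit.
    costs, prefix = table[(u, v)]
    return prefix[bisect_right(costs, limit)]
```

With n <= 10 the whole table fits trivially in memory, and each query is a dictionary lookup plus one binary search over at most n - 1 costs.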
UPDATE AFTER CHANGING UPPER LIMIT OF N
With N <= 10^5, each query can still be answered in O(1) from such a table, but the O(N^2) precomputation and storage overhead is more than we want, so we switch to other methods.
I recommend that you change the representation to be an actual tree: directed, with a root. If you want to reduce overall time, find the longest path and take the midpoint as the root node. Otherwise, just grab any node and shake. You can also get decent results by choosing a handful at random and taking the one that yields the least graph height.
While you form your tree, delete any even costs (they can never contribute to an answer); this will save checking time later.
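As a sketch of those two steps, under the same assumptions as above (1-indexed nodes, edge-list input, names of my choosing), you can root the tree with a BFS and record, per node, its parent, its depth, and the cost of the edge up to its parent, zeroing out even costs so they never need to be checked again:

```python
from collections import deque

def root_tree(n, edges, root=1):
    adj = [[] for _ in range(n + 1)]
    for a, b, w in edges:
        adj[a].append((b, w))
        adj[b].append((a, w))

    parent = [0] * (n + 1)
    up_cost = [0] * (n + 1)   # odd cost of the edge to the parent, 0 if even
    depth = [0] * (n + 1)
    seen = [False] * (n + 1)

    seen[root] = True
    q = deque([root])
    while q:
        node = q.popleft()
        for nxt, w in adj[node]:
            if not seen[nxt]:
                seen[nxt] = True
                parent[nxt] = node
                up_cost[nxt] = w if w % 2 else 0  # drop even costs here
                depth[nxt] = depth[node] + 1
                q.append(nxt)
    return parent, up_cost, depth
```

If you take the longest-path-midpoint advice, find that node first (two BFS passes) and pass it as `root`.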
Now, handling a query is trivial: @Pratik already gave you the algorithm. Note, however, that a non-trivial number of queries may make it worthwhile to store partial paths with sorted costs: for instance, each node could carry a reference list of the path weight to each child node, with the weights sorted in descending order.
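For what it's worth, here is a sketch of such a query over the rooted tree produced above. It is just the plain walk-to-the-common-ancestor idea (I'm not claiming it is literally @Pratik's formulation), again assuming the answer is the sum of odd path costs not exceeding the limit; it runs in O(tree height) per query.

```python
def query(u, v, limit, parent, up_cost, depth):
    total = 0
    # First bring both endpoints to the same depth.
    while depth[u] > depth[v]:
        if 0 < up_cost[u] <= limit:
            total += up_cost[u]
        u = parent[u]
    while depth[v] > depth[u]:
        if 0 < up_cost[v] <= limit:
            total += up_cost[v]
        v = parent[v]
    # Then climb in lockstep until the two walks meet at the common ancestor.
    while u != v:
        if 0 < up_cost[u] <= limit:
            total += up_cost[u]
        if 0 < up_cost[v] <= limit:
            total += up_cost[v]
        u, v = parent[u], parent[v]
    return total
```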