The compiler allows that, but I am not sure if it will get me in trouble. Can someone please clarify whether it will cause issues?
Most of the time this is a reasonable practice. I often use partial classes to separate out an interface implementation, or the implementation of a nested class, and you should feel confident doing so.
However, there is one situation where the practice you describe, explicitly restating an interface that is already implemented, can surprise you: the interface re-implementation rule.
It is best described with a little example. Suppose you have
interface I { void Foo(); }
// B does not implement I.
class B { public void Foo() {} }
class C : B, I { }
That's perfectly legal. C implements I, which means C must have method Foo, and C does have method Foo; it inherited it from B.
Now consider:
// D implements I because D derives from C
class D : C { public new void Foo() {} }
// The I is unnecessary here, right? Or is it?
class E : D, I { }
The interface re-implementation rule is this: because E explicitly states that it implements I, the compiler re-does the interface mapping to determine which method is bound to I.Foo. Summing up:
((I)(new C())).Foo()   // calls B.Foo
((I)(new D())).Foo()   // calls B.Foo
((I)(new E())).Foo()   // calls D.Foo
This can be surprising, and it can bite you when you restate an interface unnecessarily. It almost never happens, and it almost never matters when it does, but you asked for any possible issue, and this is such a situation.
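To see the rule in action, here is a minimal self-contained sketch. The type hierarchy is exactly the one above; the Console.WriteLine bodies and the Program class are mine, added only so the binding is visible when you run it.

using System;

interface I { void Foo(); }

class B { public void Foo() { Console.WriteLine("B.Foo"); } }

// C implements I with the Foo it inherits from B.
class C : B, I { }

class D : C { public new void Foo() { Console.WriteLine("D.Foo"); } }

// E restates I, so the compiler re-runs interface mapping and binds I.Foo to D.Foo.
class E : D, I { }

class Program
{
    static void Main()
    {
        ((I)new C()).Foo();   // prints B.Foo
        ((I)new D()).Foo();   // prints B.Foo -- D did not re-implement I
        ((I)new E()).Foo();   // prints D.Foo -- E's redundant I triggered re-binding
    }
}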
Now let's consider your follow-up question:
Clearly the two scenarios are different since the compiler allows one and not the other. There must be a reason.
As we've already seen, C#'s designers wanted a way to express "I've introduced a new member that I want to be bound to the interface members", and the somewhat obscure choice they made was to allow this redundant declaration to mean "do the re-binding now".
However, there is another design reason to allow that sort of redundancy, and moreover, to not warn about it when it happens.
A good habit to get into when thinking about odd C# design decisions is to remember that the designers of C# are always explicitly worried about how code changes from one version to the next. In particular, they worry about many different forms of the "brittle base class" failure: the failure mode where someone makes a perfectly reasonable change to a base class, which then causes a derived class to unexpectedly break or behave strangely.
Consider for example this sequence of events. First, developer X writes:
interface IA { void Foo(); }
and then developer Y writes
class C { public void Foo() {} }
and then developer Z writes:
class D : C, IA { }
and then developer Y realizes, "oh, I already implement the contract of IA; I can declare that for free," and changes class C to
class C : IA { }
So now the question is: should the compiler warn that the IA on class D is now redundant? The answer is no: the author of class D did nothing wrong and should not be made to do work to fix a problem that does not actually exist.
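To make that concrete, here is the state of the code after developer Y's change. The Console.WriteLine body and the Program class are mine, added just so the sketch compiles and runs:

using System;

interface IA { void Foo(); }

// Developer Y's change: C now declares IA itself.
class C : IA { public void Foo() { Console.WriteLine("C.Foo"); } }

// Developer Z's class: the IA here is now redundant, but it still
// compiles cleanly, with no warning and no change in behavior.
class D : C, IA { }

class Program
{
    static void Main() => ((IA)new D()).Foo();   // prints C.Foo
}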
That's why C# tends to be pretty tolerant of redundancy, though unfortunately it is not always consistently tolerant. (Re-statements of generic constraints, for example, are illegal but in my opinion should be allowed.)
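For the curious, here is what such a restated constraint looks like. The names Sorter and QuickSorter are mine, invented for the illustration; the point is that an override inherits its constraints from the base declaration and (leaving aside a few narrow exceptions in recent C# versions) is not allowed to restate them:

using System;
using System.Collections.Generic;

abstract class Sorter
{
    // The constraint is declared once, on the base method.
    public abstract void Sort<T>(List<T> items) where T : IComparable<T>;
}

class QuickSorter : Sorter
{
    // Restating "where T : IComparable<T>" here is a compile-time error;
    // the override simply inherits the constraint from Sorter.Sort<T>.
    public override void Sort<T>(List<T> items)
    {
        items.Sort();
    }
}

class Program
{
    static void Main()
    {
        var numbers = new List<int> { 3, 1, 2 };
        new QuickSorter().Sort(numbers);
        Console.WriteLine(string.Join(", ", numbers));   // 1, 2, 3
    }
}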
In short, any time you think "C# could be telling me about this redundancy", ask yourself: what if the redundancy was caused by someone else making a change that you don't control? Would the warning be annoying or helpful? If annoying, the language design probably suppresses it deliberately.