I have an application that monitors the URL open in a browser. Currently I'm targeting Chrome. I followed the solution from this answer:
using System;
using System.Diagnostics;
using System.Windows.Automation; // requires references to UIAutomationClient and UIAutomationTypes

public static string GetChromeUrl(Process process) {
    if (process == null)
        throw new ArgumentNullException("process");
    if (process.MainWindowHandle == IntPtr.Zero)
        return null;
    AutomationElement element = AutomationElement.FromHandle(process.MainWindowHandle);
    if (element == null)
        return null;
    // The address bar is the first Edit control under Chrome's main window.
    AutomationElement edit = element.FindFirst(TreeScope.Children, new PropertyCondition(AutomationElement.ControlTypeProperty, ControlType.Edit));
    if (edit == null) // the address bar may not be exposed yet
        return null;
    return ((ValuePattern)edit.GetCurrentPattern(ValuePattern.Pattern)).Current.Value as string;
}
Then I have this loop that logs the URL to a text file (using log4net):
while (true) {
    foreach (Process process in Process.GetProcessesByName("chrome")) {
        string url = GetChromeUrl(process);
        if (url == null)
            continue;
        Console.WriteLine(url); // printed here for illustration; the original code logs it instead
    }
    Thread.Sleep(1000); // poll once per second
}
It works pretty well, but there is an inconsistency in the GetChromeUrl method: sometimes it returns null, and that is my big problem. Does anyone have a better solution?