Granted, I don't know what you're actually trying to do, but as someone who's been scraping webpages for years I have to give you some unsolicited advice. I apologize in advance.
I would strongly urge you to transition to something that can handle javascript. Mechanize is a great module and it was amazingly useful back in the day, but the modern web is all blinking lights, CSS, and dancing babies you have to click.
The reason I say this is that those 'hidden' fields could be something fancy, or they could be javascript-modified forms, and you'll waste hours reverse engineering how they work just to hammer the square peg into the round hole.
The modern but unfortunately titanically heavy-weight replacements for Mechanize that I would suggest are:
phantomjs, which provides a WebKit-based, javascript-centric way to interact with webpages (headlessly, which is a bonus). It's Qt-based, but it has solid release binaries, and if you build from source it actually bundles everything it needs to run without having to sync up with some specific version of Qt.
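As a rough sketch of what driving phantomjs from Python can look like — the script body is javascript, because that's what phantomjs executes; the function name and the temp-file plumbing are just illustrative, and the whole thing quietly no-ops if the phantomjs binary isn't on your PATH:

```python
import shutil
import subprocess
import tempfile

# A minimal phantomjs script: load a URL, print the DOM *after*
# javascript has run, then exit with a status code.
PHANTOM_SCRIPT = """
var system = require('system');
var page = require('webpage').create();
page.open(system.args[1], function (status) {
    if (status === 'success') {
        console.log(page.content);
    }
    phantom.exit(status === 'success' ? 0 : 1);
});
"""

def render_with_phantomjs(url):
    """Return the javascript-rendered HTML of `url`, or None if
    the phantomjs binary isn't installed."""
    if shutil.which("phantomjs") is None:
        return None
    with tempfile.NamedTemporaryFile("w", suffix=".js", delete=False) as f:
        f.write(PHANTOM_SCRIPT)
        script_path = f.name
    out = subprocess.check_output(["phantomjs", script_path, url])
    return out.decode("utf-8", "replace")
```

The key point versus Mechanize: `page.content` is the DOM after scripts have executed, so javascript-built forms and fields are actually there.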
PySide bindings for QtWebKit, which are nifty, although there can be a bit of a learning curve. IMHO this is my favorite, just because it's nice to be able to reach inside the browser and get my hands dirty to see what's going on.
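A minimal sketch of that QtWebKit approach, assuming the PySide bindings are installed — the function name is mine, and the PySide imports are deferred into the function so the snippet doesn't require PySide just to load:

```python
def render_with_qtwebkit(url):
    """Load `url` in a QtWebKit page and return the HTML after
    javascript has run. Requires the PySide bindings."""
    # Imported here so the rest of the file works without PySide.
    from PySide.QtGui import QApplication
    from PySide.QtCore import QUrl
    from PySide.QtWebKit import QWebPage

    app = QApplication([])
    page = QWebPage()
    # Quit the event loop once the page (and its scripts) has loaded.
    page.loadFinished.connect(app.quit)
    page.mainFrame().load(QUrl(url))
    app.exec_()
    # This is the "reach inside the browser" part: mainFrame() gives
    # you the live DOM, and you can also call evaluateJavaScript().
    return page.mainFrame().toHtml()
```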
WebKit also provides a nice (although poorly supported from Python) remote-debugging interface: you enable a websocket server in the browser and drive it over websockets using the API defined in Inspector.json. Stock Chrome supports this out of the box; you can find more details on the Chrome developer website.
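A sketch of what those websocket messages look like, using only the standard library to build them — the helper name is mine, and the actual websocket connection is omitted (you'd need a websocket client library for that):

```python
import json

def devtools_command(msg_id, method, **params):
    """Build one JSON message for the remote-debugging (Inspector)
    protocol; you send these over the browser's websocket."""
    return json.dumps({"id": msg_id, "method": method, "params": params})

# Typical flow (the method names are the protocol's own):
#   1. start the browser with:  chrome --remote-debugging-port=9222
#   2. GET http://localhost:9222/json  -> a list of open pages, each
#      with a "webSocketDebuggerUrl" to connect to
#   3. over that websocket, send commands such as:
navigate = devtools_command(1, "Page.navigate", url="http://example.com/")
get_dom = devtools_command(2, "DOM.getDocument")
```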
So, yes, this is all pretty WebKit-heavy and has nothing to do with what you asked, but in the long run this is where you're going to end up if you want to really navigate and scrape the web automatically.