
I'm using Redis to cache my data, and to create the key for each key-value pair stored in Redis, I use the stringified Mongoose query because of its uniqueness. But I have queries that use Date.now(), so the key keeps changing, making it impossible to reference again later. I therefore need a regex that removes the Unix timestamp that follows "$gte" or "$lte". The gte and lte part can stay; just the timestamp that changes needs to go. Can anyone show me a regex expression to accomplish this? I appreciate any help!

Sample query

{"$and":[{"auctionType":{"$eq":"publicAuction"}},{"auctionEndDateTime":{"$gte":1583173635163}},{"blacklistGroup":{"$ne":"5e5a99fcd4a8685088b7937c"}},{"startTime":{"$lte":1583173635163}}],"collection":"listings"}
  • Why use regex when you can just remove any keys whose values are unix timestamps? Context with how you are using this "query" would help get you an answer too – Marie Mar 03 '20 at 17:16
  • The Unix timestamps are already in place by the time I get to the query in the cache.js file from the link I posted below. So I need to clean the data once it's the stringified query stored as ```const key```. Here's a related Stack post I made that goes into more detail: https://stackoverflow.com/questions/60380902/redis-wont-retrieve-data-from-cache – user6680 Mar 03 '20 at 17:22
  • You are caching the data, so you can just remove them where you are caching them. You could match any string of digits after a colon, but one day, if you add another number, that is going to come back to bite you. – Marie Mar 03 '20 at 17:50
  • I realize the Unix timestamps can increase in length over time, so I was hoping regex could solve that. Otherwise I'm open to other ideas if you have any, but I feel like regex is most appropriate for patterns. – user6680 Mar 03 '20 at 17:52
  • The correct course of action is to go to where you save the data in the first place and filter your data before you use it as a key. Editing JSON as a string is never a good idea. – Marie Mar 03 '20 at 17:54
  • If you REALLY need to you can just match on a colon followed by 13 digits. Your code will work for 250+ years. Of course if you ever need to use a 64 bit key for something that code can fail. There is no way to tell whether 13 digits is a datetime or not. – Marie Mar 03 '20 at 18:01
  • The JSON data is being used as a key for the key-value pair, though; it's not like the value itself is getting manipulated. Also, even if the Unix timestamp increased by a digit, that would be in 400 years. I hope my website is that successful, but that would be a problem for my future great-grandkids lol. I added a 0 to today's epoch at https://www.epochconverter.com/ The only time I'm using a date is right after ```lte``` or ```gte```; couldn't that be the indicator to help identify the date in regex? – user6680 Mar 03 '20 at 18:02
  • Making sure you get lte/gte would definitely alleviate most of the concern. My point was specifically that this is not a good way regardless. It is easier to just NOT save those values as part of the key in the first place. – Marie Mar 03 '20 at 18:25
  • My challenge with that is I need the date to exist when hitting the db so that the cache gets the correct information stored, but when it stores that updated value in the cache, the key can't have the date because it changes. I can get rid of the date just by doing ```str.replace(dateValue, "")``` where it is stored in the first place, in app.js where the mongoose query gets called, but then I'm struggling with passing the new key into ```mongoose.Query.prototype.exec = async function () {``` inside cache.js as a parameter I can use when there is a date for that particular query (a rough sketch of what that might look like is below). – user6680 Mar 03 '20 at 18:31
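
For reference, here is a minimal sketch of the approach suggested in the comments: instead of regex-editing the JSON string, build the key inside the patched exec from the query object itself, with the volatile $gte/$lte values blanked out before stringifying. This assumes the usual cache.js pattern where exec is monkey-patched; the helper name and the use of this.mongooseCollection.name for the collection part of the key are just illustrative:

```javascript
// cache.js (sketch) — build the Redis key from the query object rather than
// from the already-stringified query, dropping the values under $gte / $lte.
const mongoose = require("mongoose");
const exec = mongoose.Query.prototype.exec;

// Hypothetical helper: deep-copies the filter and blanks out $gte/$lte values,
// since those come from Date.now() and change on every request.
function stripVolatileDates(filter) {
  const copy = JSON.parse(JSON.stringify(filter));
  const walk = (node) => {
    if (node === null || typeof node !== "object") return;
    for (const k of Object.keys(node)) {
      if (k === "$gte" || k === "$lte") node[k] = "";
      else walk(node[k]);
    }
  };
  walk(copy);
  return copy;
}

mongoose.Query.prototype.exec = async function () {
  // this.getQuery() returns the filter conditions of the query being executed
  const key = JSON.stringify({
    ...stripVolatileDates(this.getQuery()),
    collection: this.mongooseCollection.name,
  });

  // ...check Redis for `key` here and return the cached value on a hit...

  // Fall through to the real exec (which still sees the original dates) on a miss
  return exec.apply(this, arguments);
};
```

That way the dates still reach the database untouched, but never make it into the cache key, which was the sticking point described above.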

0 Answers