
I've recently inherited a project for the iPad which is basically an image/presentation viewer. After selecting the presentation they wish to view, the user is shown the images one by one, changing when they swipe up or down, or when they tap one of the arrows in the corners of the screen. However, some of the images in a presentation have pagination dots, with arrows on either side of them. The images are in order, and as you swipe from one to the next, the highlighted dot moves. What our client would like is to be able to use the dots and arrows, which are part of the static image, for navigation. Meaning when they hit a dot, it takes them to the appropriate page.

The early, early, original version of this application had a huge plist file with information on all the images in the app, including if there were any of these carousel views, and where they would go. This has been long gone now, and there are many, many more images/presentations in the app than there were then.

Is there a better way to determine where to listen for touch? Or should I resign myself to writing a text file, and knowing I'd have to edit that when the content changes? I've been told that the content shouldn't change that often.

durron597
s73v3r
  • So all the dots are at fixed places in the image and the number of dots are fixed for all the images? – iDev Oct 11 '12 at 21:07
  • Sadly no, it's not that easy. A presentation contains anywhere from a handful of images up to 30 or more. If a presentation has a carousel view, it only spans a subset of that presentation's images. The location of the dots can vary between presentations, but a series of images in the same presentation with the same dots will have them in the same place. – s73v3r Oct 11 '12 at 21:10
  • So ideally you need some algorithm to detect the dots in an image, right? Check this http://stackoverflow.com/questions/12145450/analyze-image-and-find-dots-in-ios and this http://dsp.stackexchange.com/questions/2644/iphone-ios-uiimage-how-to-detect-a-laser-pointer-dot-on-a-camera-feed – iDev Oct 11 '12 at 21:13
  • I just saw you accepted my answer - did you actually use the idea? I do think it would work, and am now really curious... – David H Feb 26 '13 at 19:27
  • I started to do something like that, although the locations were stored in a plist. Then the project fell through. – s73v3r Feb 27 '13 at 17:33

1 Answer


You need to create a mapping from each image to its dot locations, build it dynamically as images become eligible for view, and remove it when the image goes out of scope. If you are using UIImageViews, you can use the tag property for this; raw CGImageRefs or UIImages will work too.

Create an NSMutableDictionary whose keys identify the images. For a UIImageView the key can be an NSNumber wrapping the tag; for plain images you can use an NSValue holding the image's address. When an image comes into scope, add an entry under that key whose value is either a single NSNumber identifying the dot location, or a string representation of the hit rectangle (recall that you can map CGRects to strings and back with NSStringFromCGRect() and CGRectFromString()).
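The registry idea above might look something like this in Swift — a minimal sketch, where `DotNavigator`, `register(tag:rects:)`, and `pageIndex(forTap:inImageWithTag:)` are hypothetical names, and the hit rectangles are assumed to already be known in image-local coordinates:

```swift
import Foundation

// Hypothetical registry mapping an image view's tag to the hit
// rectangles of its pagination dots, in image-local coordinates.
final class DotNavigator {
    private var dotRects: [Int: [CGRect]] = [:]

    // Add an entry when the image comes into scope...
    func register(tag: Int, rects: [CGRect]) { dotRects[tag] = rects }

    // ...and remove it when the image goes away.
    func unregister(tag: Int) { dotRects[tag] = nil }

    // Returns the index of the dot containing the tapped point
    // (i.e. the page to navigate to), or nil if the tap missed.
    func pageIndex(forTap point: CGPoint, inImageWithTag tag: Int) -> Int? {
        guard let rects = dotRects[tag] else { return nil }
        return rects.firstIndex { $0.contains(point) }
    }
}
```

Using an array of CGRects rather than strings just sidesteps the NSStringFromCGRect() round-trip; either representation works.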

When you get a tap on an image, look up the key for the relevant image, fetch the locator from the mutable dictionary, and allow or disallow the navigation. You will also need to know the frame of the image within its container, since whatever is doing the tap recognition is probably not the image itself, so that you can convert the tap to image-local coordinates.
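Tying the last step together — a hedged sketch of the coordinate conversion, assuming the image view's frame is untransformed (for live views, `UIView.convert(_:to:)` or `UITapGestureRecognizer.location(in:)` does this for you; `imageView`, `navigator`, and `goToPage(_:)` below are hypothetical):

```swift
import Foundation

// Converts a tap point in the container's coordinate space to the
// image's local space, for a simple untransformed frame.
func imageLocalPoint(_ containerPoint: CGPoint, imageFrame: CGRect) -> CGPoint {
    CGPoint(x: containerPoint.x - imageFrame.origin.x,
            y: containerPoint.y - imageFrame.origin.y)
}

// Hypothetical tap handler on the container view:
// @objc func handleTap(_ recognizer: UITapGestureRecognizer) {
//     let local = recognizer.location(in: imageView)  // or imageLocalPoint(...)
//     if let page = navigator.pageIndex(forTap: local,
//                                       inImageWithTag: imageView.tag) {
//         goToPage(page)
//     }
// }
```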

David H