#24 - Image Recognition Location Mapping

Before discussing the solution, I need to spend time identifying the problem. So before I begin research, I’m watching a few YouTube videos on how to write problem statements, so that I can concisely define my project today.

Microsoft took an interesting approach when outlining a similar solution to the indoor navigation problem. They centered the problem around the difficulty of accurate positioning indoors. They discussed the issues with GPS, then moved on to beacons and WiFi, noting that none of them worked effectively. Through research they realized they should focus on navigation instead of positioning. You may think navigation requires positioning, but not the way Microsoft creatively devised their solution: they offer a peer-to-peer approach that allows users to share their own data with those around them.

Different from Microsoft, the problem I want to focus on is the lack of communication between people within a single building, and how it drastically decreases our awareness and our ability to predict how we will navigate the space around us.

I saw this guy at the airport yesterday, waving the Wet Floor sign to dry the floor – I think these markers have had their day.

For example, let’s take a look at Waze, an app so good that Google bought it. The real value Waze creates over other mapping software, such as Google Maps, comes from crowd-sourcing and leveraging user engagement, which it incentivizes via gamification techniques. The key to Waze’s value creation is humans’ interpretation of events, such as accidents or traffic re-routing by police, something a piece of software alone would not be able to accomplish. By maintaining an active user base, Waze is able to improve a driver’s routes and gas mileage, while also unlocking greater value for other technologies, such as driverless cars.

The value Waze brings to consumers on the road has yet to be defined indoors, where (depending on the situation) a user’s focus is less about traveling from point A to point B and more about optimizing efficiency and safety within their space. I would like to stay away from using beacons and GPS indoors - because they’re not accurate enough - and focus more (like Microsoft) on location awareness. Potential solutions for location awareness without using GPS are:

a) Specifying a point of reference with text input (i.e. Restaurant, or more specifically Restaurant Patio)

b) Image recognition: scan where you are and it matches up with another user’s image (a rough sketch follows this list)

c) Scanning a QR code (a user places a QR code sticker where they placed a report)
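
To make option b) a little more concrete, here’s a minimal sketch of how one user’s photo could be compared against another user’s stored image on-device, using Apple’s Vision framework feature prints. The helper names and the distance threshold are my own placeholders, not a settled design.

```swift
import Vision
import UIKit

// Compute a feature print (a compact visual descriptor) for a photo.
func featurePrint(for image: UIImage) throws -> VNFeaturePrintObservation? {
    guard let cgImage = image.cgImage else { return nil }
    let request = VNGenerateImageFeaturePrintRequest()
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try handler.perform([request])
    return request.results?.first as? VNFeaturePrintObservation
}

// Decide whether two photos show the same spot by comparing their feature prints.
// The threshold is a guess; picking a real cutoff would take testing.
func imagesMatch(_ a: UIImage, _ b: UIImage, threshold: Float = 10.0) throws -> Bool {
    guard let printA = try featurePrint(for: a),
          let printB = try featurePrint(for: b) else { return false }
    var distance: Float = 0
    try printA.computeDistance(&distance, to: printB)
    return distance < threshold // smaller distance = more visually similar
}
```

A smaller distance means the two images look more alike, so matching a scan against other users’ reports would essentially be a nearest-neighbor search over stored feature prints.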

My favorite iteration so far is image recognition - let me expand on it a bit more. For image recognition to work I still need to do some research here, but it’s definitely an interesting topic.

Perhaps we can couple image recognition with GPS indoors to improve its accuracy.

Mobile travel guide using image recognition and GPS/Geo tagging: A smart way to travel

In this research paper by four authors (I don’t want to reference the names, but click the link to read more), they discuss a solution that uses image recognition to offer a “smarter way to travel.” Since I can’t afford to pay for the article, I only read the abstract, but it has an interesting few sentences that describe their value-add: “The main function of this application is to recognize a monument or a famous spot from the picture clicked/uploaded by the user and to provide detailed information regarding it.” The purpose of their research-based application is to create an easily accessible database that provides more information about your surroundings by taking a photo of what you’re looking at – different from Google Maps and other related search engines, which require you to search through a list of what’s nearby and filter through it for what you’re looking for. This app saves you from that cumbersome process and gives you information right away.

Regular process:

  1. Search Nearby or Current Location

  2. Look at a list of nearby attractions <- Pain point

  3. Choose the attraction you are looking at

Their process:

  1. App automatically knows your current location

  2. Camera app opens - and you take a picture of what you’re looking at

  3. Information about the attraction appears

So this app is not actually reducing the number of steps, but it makes step 2 quicker.
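
Translated into code, their process might look something like this: a coarse location fix trims the candidate list, and the photo does the choosing. The Attraction type, the 200 m radius, and the matcher parameter are all my assumptions - none of this comes from the paper itself.

```swift
import CoreLocation
import UIKit

// A hypothetical attraction record the app would already have on file.
struct Attraction {
    let name: String
    let location: CLLocation
    let referenceImage: UIImage
}

// Step 1: the app already knows roughly where you are (coarse GPS is fine here).
// Steps 2-3: the photo you take replaces scrolling through a list of what's nearby.
func identifyAttraction(photo: UIImage,
                        near userLocation: CLLocation,
                        from attractions: [Attraction],
                        radius: CLLocationDistance = 200,
                        matcher: (UIImage, UIImage) -> Bool) -> Attraction? {
    let nearby = attractions.filter { $0.location.distance(from: userLocation) < radius }
    return nearby.first { matcher(photo, $0.referenceImage) }
}
```

The earlier feature-print helper could be plugged in as the matcher, e.g. `{ (try? imagesMatch($0, $1)) ?? false }`.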

This is not necessarily a solution for indoor positioning - but it’s an interesting alternative for finding out more information based on your location. The problem is how the user will know what to take pictures of - reports are generally more scattered, and if the user already knows to take a picture of something, they wouldn’t need the app to begin with. How will the user know to turn the camera on?

Image recognition coupled with GPS could work, but the camera will need to be continuously on - as is the case with self-driving cars.

Boom, I found this research paper:

Indoor navigation by image recognition: https://www.spiedigitallibrary.org/conference-proceedings-of-spie/10420/104200H/Indoor-navigation-by-image-recognition/10.1117/12.2281569.short?SSO=1 – I really want to read this but sadly it costs money - oy vey - I’m tempted to buy it.

There’s another patent which pairs image recognition with the gyroscope and accelerometer on the device for indoor navigation.
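
I haven’t read the patent, so this is only a rough sketch of what pairing those sensors with a recognized starting point could look like on iOS: Core Motion’s pedometer for distance walked, device motion for heading. The class name and the print statements are illustrative, not the patent’s method.

```swift
import CoreMotion

// Track movement relative to the last recognized landmark using built-in sensors.
final class RelativeMotionTracker {
    private let pedometer = CMPedometer()
    private let motion = CMMotionManager()

    func start() {
        // Distance walked since the starting point (e.g. an image-recognized sign).
        if CMPedometer.isDistanceAvailable() {
            pedometer.startUpdates(from: Date()) { data, _ in
                if let meters = data?.distance?.doubleValue {
                    print("Walked ~\(meters) m since the last recognized landmark")
                }
            }
        }
        // Heading from the gyroscope/accelerometer/magnetometer fusion.
        if motion.isDeviceMotionAvailable {
            motion.deviceMotionUpdateInterval = 0.1
            motion.startDeviceMotionUpdates(using: .xMagneticNorthZVertical,
                                            to: .main) { data, _ in
                if let yaw = data?.attitude.yaw {
                    print("Heading (yaw): \(yaw) rad")
                }
            }
        }
    }
}
```

Distance plus heading gives a rough displacement from the starting point, which is presumably the kind of dead reckoning the patent layers on top of image recognition.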

https://medium.com/@sabarish.gnanamoorthy/using-computer-vision-to-identify-positioning-for-navigation-with-arkit-76043cf40ee4

A 14-year-old kid built this - using ARKit to identify placed triggers in the environment. Based on the triggers you scan, the app will give you instructions on getting to that location - an interesting solution if you are at an airport: you could scan your flight on the flip board and it would give you instructions on getting there.
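
Here’s a minimal sketch of that trigger idea using ARKit’s built-in image detection: you register reference images (say, signs or flip-board panels photographed ahead of time) and react when the session recognizes one in view. The asset group name "IndoorMarkers" and the print statement are stand-ins for whatever the real navigation step would be.

```swift
import ARKit

// React when the camera recognizes a pre-registered marker image in the environment.
final class MarkerTriggerSession: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        let config = ARWorldTrackingConfiguration()
        // Reference images bundled with the app (the group name is an assumption).
        if let markers = ARReferenceImage.referenceImages(inGroupNamed: "IndoorMarkers",
                                                          bundle: .main) {
            config.detectionImages = markers
        }
        session.delegate = self
        session.run(config)
    }

    func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
        for case let imageAnchor as ARImageAnchor in anchors {
            // The marker's name tells us which physical spot was scanned.
            let markerName = imageAnchor.referenceImage.name ?? "unknown"
            print("Recognized marker: \(markerName) - start directions from here")
        }
    }
}
```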

So there are two things I need to figure out right now. First, what is it the app is going to solve? My answer, for the sake of simplicity, is that we’re going to solve indoor safety. Second, I need to figure out how to map the location so the user knows when to activate the alert. My candidate solutions based on image recognition are:

  1. Taking images of attractions in the environment and seeing if anything comes up (similar to the attraction-based image app)

  2. Having the indoors scanned beforehand (not recommended)

  3. Piecing images together from multiple users to create an indoor depth map (cool, but it creates two different experiences depending on whether image data is available or not); this can also be paired with number 1

  4. Placing markers around the location to trigger actions (like the flip-board example I discussed above)

  5. Pre-saving markers using Placenote (see the sketch after this list)
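
I don’t have Placenote’s SDK in front of me, so for options 3 and 5 here’s the closest built-in analogue I know of: ARKit’s ARWorldMap, which lets one session’s scan of a space be saved and reloaded by a later session. The function names and the bare-bones file handling are my own, not Placenote’s API.

```swift
import ARKit

// Save the current session's map of the space so another user (or a later
// session) can relocalize against it.
func saveWorldMap(from session: ARSession, to url: URL) {
    session.getCurrentWorldMap { worldMap, _ in
        guard let map = worldMap,
              let data = try? NSKeyedArchiver.archivedData(withRootObject: map,
                                                           requiringSecureCoding: true)
        else { return }
        try? data.write(to: url)
    }
}

// Load a previously saved map and run a session that tries to relocalize in it.
func restoreWorldMap(into session: ARSession, from url: URL) {
    guard let data = try? Data(contentsOf: url),
          let map = try? NSKeyedUnarchiver.unarchivedObject(ofClass: ARWorldMap.self,
                                                            from: data) else { return }
    let config = ARWorldTrackingConfiguration()
    config.initialWorldMap = map
    session.run(config, options: [.resetTracking, .removeExistingAnchors])
}
```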

I’m going to assume that I’ll be focusing on indoor safety, and I will decide which method is most effective for that category.

It’s going to be a camera view, so this app will have an augmented reality UI (unless the camera is running in the background). But the camera should run the entire time, constantly taking images and mapping the environment.
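
A minimal sketch of that always-on loop, assuming ARKit: the session delegate receives every camera frame, and sampling one every few seconds gives an image (plus the device’s pose) that could be matched or contributed to a shared indoor image map. The sampling interval and the “next step” comment are placeholders.

```swift
import ARKit

// Sample frames from a running AR session so they can feed the mapping idea.
final class ContinuousMapper: NSObject, ARSessionDelegate {
    private var lastSample = Date.distantPast
    private let sampleInterval: TimeInterval = 3.0

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        let now = Date()
        guard now.timeIntervalSince(lastSample) > sampleInterval else { return }
        lastSample = now
        let pixelBuffer = frame.capturedImage      // raw camera image for recognition
        let cameraPose = frame.camera.transform    // device pose within this session
        // Hypothetical next step: compute a feature print from pixelBuffer and
        // store it alongside cameraPose so frames from many users can be stitched
        // into a shared map.
        _ = (pixelBuffer, cameraPose)
    }
}
```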

This should be possible - and I’ll do more research to back it up - but what if this app is specifically geared toward airports and travel? OK, that’s the new focus: airports, train stations, etc. We can put cameras in luggage that constantly record and pick up data - similar to the cameras in a self-driving car - and use that data to create image maps of indoor environments from multiple users, piecing them together.

Navigate using the built-in iPhone sensors, referenced by a specific location or starting point, or navigate using image maps.

Pull up location information via image recognition or __

I still haven’t fully identified my problem lol. My problems are vast.
