I had the opportunity to attend this year’s Google I/O in San Francisco. Among the wealth of information I took away from the sessions, one of the biggest “Aha!” moments came from the session Point, Click, Tap, Touch: Building Multi-Device Web Interfaces. In it, the presenters called out the problem of developers disabling mouse support when touch support is detected. Doing so can result in a frustrating experience for users whose devices offer multi-input support (e.g. Microsoft Surface, Chromebook Pixel). In short, developers should not assume a user will use one type of input over another.
Inspired by this (and having obtained a new Chromebook Pixel, which proved to be a wonderful device to test on), I created a simple drag-and-drop demo that binds all mouse and touch events (including those on the MS Surface) and even allows multiple items to be moved at the same time. I invite you to check it out, view its source, and play around with it. Hopefully it is something you can reference in your own projects to improve the user experience across the range of devices.
A few notes about the demo
- All mouse and touch events are bound and handled appropriately.
- Users can drag more than one object at a time.
- On multi-input devices (Surface, Pixel), users can drag and drop using mouse and touch events at the same time.
- The JS is optimized to reduce layout thrashing, a common issue with coordinate-based layouts that read offsetWidth (which forces a layout) and/or set style.width (which invalidates the layout). This was a tip I learned from another Google I/O session, Device Agnostic Development.
- All DOM updates are performed in a render method, which is called via window.requestAnimFrame rather than directly in the mouse/touch move handlers. This further optimizes the JS and keeps the frame rate more consistent.
- The JS lints cleanly at JSHint.com.
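To illustrate the ideas above, here is a minimal sketch of how per-pointer drag state and a coalesced render pass might be structured. The names (DragTracker, scheduleRender, applyPosition) are illustrative, not taken from the demo’s actual source: in the browser, scheduleRender would be window.requestAnimFrame and applyPosition would write element styles, with the mouse using a fixed id and touches keyed by their Touch.identifier values.

```javascript
// Illustrative sketch (not the demo's actual code): one entry per active
// pointer lets a mouse drag and several touch drags coexist, and move events
// only update state -- DOM writes are deferred to a single render pass.
class DragTracker {
  constructor(scheduleRender, applyPosition) {
    this.active = new Map();          // pointer id -> { x, y }
    this.scheduleRender = scheduleRender; // e.g. window.requestAnimFrame
    this.applyPosition = applyPosition;   // e.g. writes style.transform
    this.pending = false;             // is a render already scheduled?
  }
  start(pointerId, x, y) {
    this.active.set(pointerId, { x, y });
  }
  move(pointerId, x, y) {
    const drag = this.active.get(pointerId);
    if (!drag) return;
    drag.x = x;                       // cheap state update only --
    drag.y = y;                       // no DOM reads or writes here
    if (!this.pending) {              // coalesce many moves into one render
      this.pending = true;
      this.scheduleRender(() => {
        this.pending = false;
        this.render();
      });
    }
  }
  end(pointerId) {
    this.active.delete(pointerId);
  }
  render() {
    // Batched writes: reads like offsetWidth never interleave with these.
    for (const [id, pos] of this.active) {
      this.applyPosition(id, pos.x, pos.y);
    }
  }
}

// Hypothetical browser wiring -- both input types stay bound:
// el.addEventListener('mousedown', e => tracker.start('mouse', e.pageX, e.pageY));
// el.addEventListener('touchstart', e => {
//   for (const t of e.changedTouches) tracker.start(t.identifier, t.pageX, t.pageY);
// });
```

Because the move handler only records coordinates and schedules a render, dozens of touchmove/mousemove events per frame collapse into a single batch of style writes, which is what keeps the frame rate steady.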
Until next time, happy coding!