
Constraining / Editing Camera Input

07 May 2015 03:24
Hello,

I'm a Unity3d .Net developer, attempting to dive into this new and really fantastic technology. I'm thrilled by how good the demos look across browsers and platforms - it outperforms the new WebGL build by a long shot (so far).

I'm attempting a personal project which works similarly to the solar system application, and now that I have spent a few days trying to work out the system (without much progress), I thought I would seek the community's help.

I have inferred that by adding modules such as "controls" I would have access (similarly to app.js) to things like touch and mouse input. However, I've been trying to hack something together and have so far failed.

What I would first like to do is just console.log() a message indicating that a touch occurred. From there, I'll be editing the existing control scheme to clamp or constrain the camera on some axis.

JS is not my strong suit, especially the web stack.

Have a little time? I'd love a bit of guidance.

Thanks!
Looking forward to going full 3d.
07 May 2015 09:52
Hello and welcome to the forum!


I have inferred that by adding modules such as "controls" I would have access (similarly to app.js) to things like touch and mouse input.

You can use the main canvas event handlers to access screen touch or mouse events. For example:
function init_cb(canvas_elem, success) {
    // . . .
    if (!m_main.detect_mobile())
        canvas_elem.addEventListener("mousedown", main_canvas_down); // mouse event
    canvas_elem.addEventListener("touchstart", main_canvas_down);    // touch event
    // . . .
}

function main_canvas_down(event) {
    // your actions
}
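For reference, here is a minimal sketch of how init_cb is usually wired up through the app module's init(); the module name "my_app" and the container id "main_canvas_container" are made-up, illustrative values, not taken from this thread:

b4w.register("my_app", function(exports, require) {

var m_app  = require("app");
var m_main = require("main"); // needed for detect_mobile() inside init_cb

exports.init = function() {
    m_app.init({
        canvas_container_id: "main_canvas_container", // illustrative container id
        callback: init_cb                             // called once the canvas exists
    });
}

function init_cb(canvas_elem, success) {
    // the mouse/touch listeners from the snippet above are registered here
}

});

// kick things off once the page has loaded
b4w.require("my_app").init();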


You can look at the function call order in this tutorial.

Module "controls" is used for interaction between objects.

What I would first like to do is just console.log() a message indicating that a touch occurred.

If you use a mobile device, you can use the alert("TOUCH"); function to check the touch event. Example:

function init_cb(canvas_elem, success) {
    // . . .
    if (!m_main.detect_mobile())
        canvas_elem.addEventListener("mousedown", main_canvas_down); // mouse event
    canvas_elem.addEventListener("touchstart", main_canvas_down);    // touch event
    // . . .
}

function main_canvas_down(event) {
    alert("TOUCH");
}

07 May 2015 10:24

If you use a mobile device you can use alert("TOUCH");

You can also use browser debugging for mobile devices.

Take a look at this example:
example.zip
07 May 2015 18:14
Fantastic, thank you for your very rapid reply.

In my haste, I looked around at more example projects, and it looks like the Flag cloth simulation does pretty much everything I need with regard to constraints.

Interestingly, those constraints are established within Blender, instead of in the code.

Confusingly, there seems to be far less JS code in this example. It seems as if the touch code is compiled and somewhat obfuscated within the HTML.

That leads me to another question. I see that the JS code for touches is within the flag_caches_mix.html. Where might that have been compiled from? Presumably, the programmer responsible did not write the code in this way.

Thanks. Eager to learn and work towards more ends with this SDK!
Looking forward to going full 3d.
07 May 2015 19:03
"Flag cloth simulation" has been created without programming. There are two export ways: json and html (setting up the addon & html-export)

"Flag cloth simulation" was created by html-export.
Html-export uses our obfuscated "Webplayer" application (you can find it in our SDK: SDK/deploy/apps/webplayer) for watching scenes.

Also, you can find the non-obfuscated webplayer at SDK/apps_dev/webplayer
08 May 2015 22:29
Great!

Here is my progress thus far:
www.streamfall.com/Demo/campfire.html

Eventually I am hoping to make this page into an interactive homepage for my small dev studio.

Two questions:
  • If I plan to work towards adding buttons and such, would it be optimal to continue trying to do so in the Blender project? I do find the engine to be a comfortable place, and constraints and such do make it convenient. Or, should I work towards trying to do so with one of the json export options as a template?

  • I'll need to be able to extend the current project. I suppose I'll need to do that using Py, and figure out how to interface Py classes with Blender. I've not done that previously, so I suppose I'll need to look at standard Blender coding tutorials. Is that assumption correct? I see there are options for visual scripting...

Thank you for your help!
Looking forward to going full 3d.
10 May 2015 20:01
Hello!

Considering your task, I think you have two options here. First, implement everything in 3D and do all scripting using our NLA Script tool. This is fast but not always the best solution, because if you need something complicated it will be almost impossible to do without actual coding. The second option is to implement all scene logic using JavaScript and the page interface using HTML/CSS. This solution is definitely for pro users, allowing them to do anything they want.
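As a tiny illustration of the second option, the button below is plain HTML created from JavaScript and laid over the canvas; all names, styles and the logged message are made up for illustration and are not part of the engine:

var btn = document.createElement("button");
btn.textContent = "Do something";
btn.style.position = "absolute"; // keep the button on top of the WebGL canvas
btn.style.top = "10px";
btn.style.left = "10px";
document.body.appendChild(btn);

btn.addEventListener("click", function() {
    // call into the engine's JS API here (move the camera, start an animation, ...)
    console.log("button clicked");
});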

About Python. This language is used primarily for Blender scripting, so it's (almost) impossible to use it for web development. The only choice here is JavaScript (as well as HTML and CSS).
     