https://pathmind.com/wiki/generative-adversarial-network-gan
Notes: Discriminative models learn the boundary between classes, while generative models model the distribution of the individual classes; i.e., the discriminator decides whether each instance of data it reviews belongs to the actual training dataset or not. Here are the steps a GAN takes (see the sketch after these notes): The generator takes in random numbers and returns an image. This generated image is fed into the discriminator alongside a stream of images taken from the actual, ground-truth dataset. The discriminator takes in both real and fake images and returns probabilities, a number between 0 and 1, with 1 representing a prediction of authenticity and 0 representing fake.
https://ai.googleblog.com/2017/12/introducing-new-foveation-pipeline-for.html
Notes: However, current VR/MR technologies present a fundamental challenge: to present images at the extremely high resolution required for immer...
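Going back to the GAN steps above, here's a minimal sketch of that generator/discriminator loop in PyTorch, just to make the idea concrete. The layer sizes, the noise dimension, and the random tensor standing in for "real" images are all my own assumptions for illustration, not anything from the article.

```python
# Minimal GAN training step: generator maps noise -> image,
# discriminator maps image -> probability of being real (1 = real, 0 = fake).
import torch
import torch.nn as nn

noise_dim, img_dim, batch = 64, 28 * 28, 16   # illustrative sizes, not from the article

G = nn.Sequential(nn.Linear(noise_dim, 128), nn.ReLU(),
                  nn.Linear(128, img_dim), nn.Tanh())
D = nn.Sequential(nn.Linear(img_dim, 128), nn.ReLU(),
                  nn.Linear(128, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

real = torch.rand(batch, img_dim)             # stand-in for a batch of real images
fake = G(torch.randn(batch, noise_dim))       # generator: random numbers in, image out

# Discriminator step: push real images towards 1 and generated images towards 0.
d_loss = bce(D(real), torch.ones(batch, 1)) + bce(D(fake.detach()), torch.zeros(batch, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: try to make the discriminator score the fakes as real.
g_loss = bce(D(fake), torch.ones(batch, 1))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

In a real run this pair of steps would repeat over batches from an actual dataset instead of the random stand-in tensor.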
Paper 1: Renato Farias, Marcelo Kallmann. Improved Shortest Path Maps with GPU Shaders
The paper presents two approaches with three types of models to render point clouds. What inspired me is the potential application to WebGPU, which currently doesn't reach the kind of performance that shaders achieve in other fields. I don't really know the workflow of web rendering for the time being, and I'm also confused about what kind of restriction I should specifically dig into. Maybe a good start is to search terms like 'limitation of XXX', 'limitation of shaders', etc.
Paper 2: Kerry A. Seitz, Jr., Tim Foley, Serban D. Porumbescu, John D. Owens. Staged Metaprogramming for Shader System Development
This is an even more advanced paper for me, because it involves metaprogramming and is super long, with pages of code. However, aside from the metaprogramming, I feel the rest of the content is readable; the main purpose is to improve the shader...
I tried many ways to fix the issues, but none of them worked, not even the fixes suggested on GitHub. One of the packages, 'climin', has very little information online and is barely used. Also, the compatibility between Python versions and Blender is very uncertain, and I'm not sure which one would work, so I gave up this AI stuff and tried to look into something new. https://www.youtube.com/watch?v=JVlfSjkvefE There is an image-to-material tool, Materialize, which can turn a picture into a material with diffuse, normal, metallic, and other maps. After you create the project, it's also easy to import into Unity. Maybe I could build up something on top of that.
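To get a feel for what such image-to-material tools do, here's a rough sketch of deriving a tangent-space normal map from an image by treating its luminance as a height field. This is only the general idea, not Materialize's actual algorithm; the file names and the strength value are made up.

```python
# Treat the image's luminance as a height map and turn its gradients
# into a tangent-space normal map (the usual RGB-encoded normal texture).
import numpy as np
from PIL import Image

img = Image.open("input.png").convert("L")          # hypothetical input file
height = np.asarray(img, dtype=np.float32) / 255.0

strength = 2.0                                      # made-up bump strength
dy, dx = np.gradient(height)                        # slope of the "surface"
normal = np.dstack((-dx * strength, -dy * strength, np.ones_like(height)))
normal /= np.linalg.norm(normal, axis=2, keepdims=True)

# Pack from [-1, 1] into [0, 255] RGB, the standard normal-map encoding.
rgb = ((normal * 0.5 + 0.5) * 255).astype(np.uint8)
Image.fromarray(rgb).save("normal.png")
```

The resulting normal.png could then be assigned to a Unity material's normal-map slot alongside the original picture as the diffuse/albedo map.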