[{"authors":null,"categories":null,"content":"Shin Fujieda is a researcher mainly for ray tracing and machin learning, and a software development engineer of a GPU global illumination renderer called Radeon ProRender, in the team of ARR at AMD.\nBefore joining AMD, he was leading several projects using machine learning solutions for image processing at IBM. Previously, he received Master’s and Bachelor’s degrees in engineering both from the University of Tokyo.\n","date":1696118400,"expirydate":-62135596800,"kind":"term","lang":"en","lastmod":1696118400,"objectID":"2525497d367e79493fd32b198b28f040","permalink":"","publishdate":"0001-01-01T00:00:00Z","relpermalink":"","section":"authors","summary":"Shin Fujieda is a researcher mainly for ray tracing and machin learning, and a software development engineer of a GPU global illumination renderer called Radeon ProRender, in the team of ARR at AMD.","tags":null,"title":"Shin Fujieda","type":"authors"},{"authors":[],"categories":null,"content":" Click on the Slides button above to view the built-in slides feature. Slides can be added in a few ways:\nCreate slides using Wowchemy’s Slides feature and link using slides parameter in the front matter of the talk file Upload an existing slide deck to static/ and link using url_slides parameter in the front matter of the talk file Embed your slides (e.g. Google Slides) or presentation video on this page using shortcodes. Further event details, including page elements such as image galleries, can be added to the body of this page.\n","date":1906549200,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1906549200,"objectID":"a8edef490afe42206247b6ac05657af0","permalink":"https://shinfj.github.io/talk/example-talk/","publishdate":"2017-01-01T00:00:00Z","relpermalink":"/talk/example-talk/","section":"event","summary":"An example talk using Wowchemy's Markdown slides feature.","tags":[],"title":"Example Talk","type":"event"},{"authors":["Shin Fujieda","Atsushi Yoshimura","Takahiro Harada"],"categories":null,"content":"","date":1696118400,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1696118400,"objectID":"b83073ed0262069f085cae6525d3dee6","permalink":"https://shinfj.github.io/publication/local-positional-encoding-for-multi-layer-perceptrons/","publishdate":"2017-01-01T00:00:00Z","relpermalink":"/publication/local-positional-encoding-for-multi-layer-perceptrons/","section":"publication","summary":"A multi-layer perceptron (MLP) is a type of neural networks which has a long history of research and has been studied actively recently in computer vision and graphics fields. One of the well-known problems of an MLP is the capability of expressing highfrequency signals from low-dimensional inputs. There are several studies for input encodings to improve the reconstruction quality of an MLP by applying pre-processing against the input data. This paper proposes a novel input encoding method, local positional encoding, which is an extension of positional and grid encodings. Our proposed method combines these two encoding techniques so that a small MLP learns high-frequency signals by using positional encoding with fewer frequencies under the lower resolution of the grid to consider the local position and scale in each grid cell. 
We demonstrate the effectiveness of our proposed method by applying it to common 2D and 3D regression tasks where it shows higher-quality results compared to positional and grid encodings, and comparable results to hierarchical variants of grid encoding such as multi-resolution grid encoding with equivalent memory footprint.","tags":[],"title":"Local Positional Encoding for Multi-Layer Perceptrons","type":"publication"},{"authors":["Shin Fujieda","Chih-Chen Kao","Takahiro Harada"],"categories":null,"content":"","date":1688169600,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1688169600,"objectID":"20f6103353a3439145cedfa4ba68df9a","permalink":"https://shinfj.github.io/publication/neural-intersection-function/","publishdate":"2017-01-01T00:00:00Z","relpermalink":"/publication/neural-intersection-function/","section":"publication","summary":"The ray casting operation in the Monte Carlo ray tracing algorithm usually adopts a bounding volume hierarchy (BVH) to accelerate the process of finding intersections to evaluate visibility. However, its characteristics are irregular, with divergence in memory access and branch execution, so it cannot achieve maximum efficiency on GPUs. This paper proposes a novel Neural Intersection Function based on a multilayer perceptron whose core operation contains only dense matrix multiplication with predictable memory access. Our method is the first solution integrating the neural network-based approach and BVH-based ray tracing pipeline into one unified rendering framework. We can evaluate the visibility and occlusion of secondary rays without traversing the most irregular and time-consuming part of the BVH and thus accelerate ray casting. The experiments show the proposed method can reduce the secondary ray casting time for direct illumination by up to 35% compared to a BVH-based implementation and still preserve the image quality.","tags":[],"title":"Neural Intersection Function","type":"publication"},{"authors":["Shin Fujieda","Takahiro Harada"],"categories":null,"content":"","date":1669852800,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1669852800,"objectID":"3efe470b3b6ca507c4a0d271e1243402","permalink":"https://shinfj.github.io/publication/progressive-material-caching/","publishdate":"2017-01-01T00:00:00Z","relpermalink":"/publication/progressive-material-caching/","section":"publication","summary":"The evaluation of material networks is a relatively resource-intensive process in the rendering pipeline. Modern production scenes can contain hundreds or thousands of complex materials with massive networks, so there is a great demand for an efficient way of handling material networks. In this paper, we introduce an efficient method for progressively caching the material nodes without an overhead on the rendering performance. We evaluate the material networks as usual in the rendering process. Then, the output value of part of the network is stored in a cache and can be used in the evaluation of the next materials. 
Using our method, we can render the scene with performance equal to or better than that of the method without caching, with a slight difference in the images rendered with caching and without it.","tags":[],"title":"Progressive Material Caching","type":"publication"},{"authors":["Shin Fujieda","Yusuke Tokuyoshi","Takahiro Harada"],"categories":null,"content":"","date":1648771200,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1648771200,"objectID":"79821a9b5bf184128265661b0ddbc28f","permalink":"https://shinfj.github.io/publication/stochastic-light-culling-for-single-scattering-in-participating-media/","publishdate":"2017-01-01T00:00:00Z","relpermalink":"/publication/stochastic-light-culling-for-single-scattering-in-participating-media/","section":"publication","summary":"We introduce a simple but efficient method to compute single scattering from point and arbitrarily shaped area light sources in participating media. Our method extends the stochastic light culling method to volume rendering by considering the intersection of a ray and spherical bounds of light influence ranges. For primary rays, this allows simple computation of the lighting in participating media without hierarchical data structures such as a light tree. First, we show how to combine equiangular sampling with the proposed light culling method in a simple case of point lights. We then apply it to arbitrarily shaped area lights by considering virtual point lights on the surface of area lights. Using our method, we are able to improve the rendering quality for scenes with many lights without tree construction and traversal.","tags":[],"title":"Stochastic Light Culling for Single Scattering in Participating Media","type":"publication"},{"authors":["Maria Ximena Bastidas Rodriguez","Adrien Gruson","Luisa F. Polania","Shin Fujieda","Flavio Prieto Ortiz","Kohei Takayama","Toshiya Hachisuka"],"categories":null,"content":"","date":1583020800,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1583020800,"objectID":"f7f8aeff7e6310a54d618545cec48339","permalink":"https://shinfj.github.io/publication/deep-adaptive-wavelet-network/","publishdate":"2017-01-01T00:00:00Z","relpermalink":"/publication/deep-adaptive-wavelet-network/","section":"publication","summary":"Even though convolutional neural networks have become the method of choice in many fields of computer vision, they still lack interpretability and are usually designed manually in a cumbersome trial-and-error process. This paper aims at overcoming those limitations by proposing a deep neural network, which is designed in a systematic fashion and is interpretable, by integrating multiresolution analysis at the core of the deep neural network design. By using the lifting scheme, it is possible to generate a wavelet representation and design a network capable of learning wavelet coefficients in an end-to-end form. Compared to state-of-the-art architectures, the proposed model requires less hyper-parameter tuning and achieves competitive accuracy in image classification tasks. 
The code implemented for this research is available at https://github.com/mxbastidasr/DAWN_WACV2020.","tags":[],"title":"Deep Adaptive Wavelet Network","type":"publication"},{"authors":[],"categories":[],"content":"Create slides in Markdown with Wowchemy Wowchemy | Documentation\nFeatures Efficiently write slides in Markdown 3-in-1: Create, Present, and Publish your slides Supports speaker notes Mobile friendly slides Controls Next: Right Arrow or Space Previous: Left Arrow Start: Home Finish: End Overview: Esc Speaker notes: S Fullscreen: F Zoom: Alt + Click PDF Export Code Highlighting Inline code: variable\nCode block:\nporridge = \u0026#34;blueberry\u0026#34; if porridge == \u0026#34;blueberry\u0026#34;: print(\u0026#34;Eating...\u0026#34;) Math In-line math: $x + y = z$\nBlock math:\n$$ f\\left( x \\right) = \\;\\frac{{2\\left( {x + 4} \\right)\\left( {x - 4} \\right)}}{{\\left( {x + 4} \\right)\\left( {x + 1} \\right)}} $$\nFragments Make content appear incrementally\n{{% fragment %}} One {{% /fragment %}} {{% fragment %}} **Two** {{% /fragment %}} {{% fragment %}} Three {{% /fragment %}} Press Space to play!\nOne Two Three A fragment can accept two optional parameters:\nclass: use a custom style (requires definition in custom CSS) weight: sets the order in which a fragment appears Speaker Notes Add speaker notes to your presentation\n{{% speaker_note %}} - Only the speaker can read these notes - Press `S` key to view {{% /speaker_note %}} Press the S key to view the speaker notes!\nOnly the speaker can read these notes Press S key to view Themes black: Black background, white text, blue links (default) white: White background, black text, blue links league: Gray background, white text, blue links beige: Beige background, dark text, brown links sky: Blue background, thin dark text, blue links night: Black background, thick white text, orange links serif: Cappuccino background, gray text, brown links simple: White background, black text, blue links solarized: Cream-colored background, dark green text, blue links Custom Slide Customize the slide style and background\n{{\u0026lt; slide background-image=\u0026#34;/media/boards.jpg\u0026#34; \u0026gt;}} {{\u0026lt; slide background-color=\u0026#34;#0000FF\u0026#34; \u0026gt;}} {{\u0026lt; slide class=\u0026#34;my-style\u0026#34; \u0026gt;}} Custom CSS Example Let’s make headers navy colored.\nCreate assets/css/reveal_custom.css with:\n.reveal section h1, .reveal section h2, .reveal section h3 { color: navy; } Questions? Ask\nDocumentation\n","date":1549324800,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1549324800,"objectID":"0e6de1a61aa83269ff13324f3167c1a9","permalink":"https://shinfj.github.io/slides/example/","publishdate":"2019-02-05T00:00:00Z","relpermalink":"/slides/example/","section":"slides","summary":"An introduction to using Wowchemy's Slides feature.","tags":[],"title":"Slides","type":"slides"},{"authors":["Shin Fujieda","Kohei Takayama","Toshiya Hachisuka"],"categories":null,"content":"","date":1526774400,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1526774400,"objectID":"ce0fd71c8606b788500b38578a5520a9","permalink":"https://shinfj.github.io/publication/wavelet-convolutional-neural-networks/","publishdate":"2017-01-01T00:00:00Z","relpermalink":"/publication/wavelet-convolutional-neural-networks/","section":"publication","summary":"Spatial and spectral approaches are two major approaches for image processing tasks such as image classification and object recognition. 
Among many such algorithms, convolutional neural networks (CNNs) have recently achieved significant performance improvements in many challenging tasks. Since CNNs process images directly in the spatial domain, they are essentially spatial approaches. Given that spatial and spectral approaches are known to have different characteristics, it would be interesting to incorporate a spectral approach into CNNs. We propose a novel CNN architecture, wavelet CNNs, which combines a multiresolution analysis and CNNs into one model. Our insight is that a CNN can be viewed as a limited form of a multiresolution analysis. Based on this insight, we supplement missing parts of the multiresolution analysis via wavelet transform and integrate them as additional components in the entire architecture. Wavelet CNNs allow us to utilize spectral information which is mostly lost in conventional CNNs but useful in most image processing tasks. We evaluate the practical performance of wavelet CNNs on texture classification and image annotation. The experiments show that wavelet CNNs can achieve better accuracy in both tasks than existing models while having significantly fewer parameters than conventional CNNs.","tags":null,"title":"Wavelet Convolutional Neural Networks","type":"publication"},{"authors":["Shin Fujieda","Kohei Takayama","Toshiya Hachisuka"],"categories":null,"content":"","date":1500854400,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1500854400,"objectID":"a500adc1ee1d183cd95f7a110c605452","permalink":"https://shinfj.github.io/publication/wavelet-convolutional-neural-networks-for-texture-classification/","publishdate":"2019-12-08T00:00:00Z","relpermalink":"/publication/wavelet-convolutional-neural-networks-for-texture-classification/","section":"publication","summary":"Texture classification is an important and challenging problem in many image processing applications. While convolutional neural networks (CNNs) have achieved significant success in image classification, texture classification remains a difficult problem since textures usually do not contain enough information regarding the shape of objects. In image processing, texture classification has traditionally been studied with spectral analyses which exploit repeated structures in many textures. Since CNNs process images as-is in the spatial domain whereas spectral analyses process images in the frequency domain, these models have different characteristics in terms of performance. We propose a novel CNN architecture, wavelet CNNs, which integrates a spectral analysis into CNNs. Our insight is that the pooling layer and the convolution layer can be viewed as a limited form of a spectral analysis. Based on this insight, we generalize both layers to perform a spectral analysis with wavelet transform. Wavelet CNNs allow us to utilize spectral information which is lost in conventional CNNs but useful in texture classification. The experiments demonstrate that our model achieves better accuracy in texture classification than existing models. 
We also show that our model has significantly fewer parameters than CNNs, making our model easier to train with less memory.","tags":null,"title":"Wavelet Convolutional Neural Networks for Texture Classification","type":"publication"},{"authors":null,"categories":null,"content":"","date":1461715200,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1461715200,"objectID":"a7bf0fa84d881869e8a22e9328357173","permalink":"https://shinfj.github.io/project/prorender/","publishdate":"2016-04-27T00:00:00Z","relpermalink":"/project/prorender/","section":"project","summary":"AMD Radeon™ ProRender is a powerful physically-based path traced rendering engine that enables creative professionals to produce stunningly photorealistic images.","tags":["CG"],"title":"AMD Radeon™ ProRender","type":"project"},{"authors":["Shin Fujieda","Toshihiko Yamasaki","Kiyoharu Aizawa"],"categories":null,"content":"","date":1457568e3,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1457568e3,"objectID":"cee19826a291b39e146d6626114594e0","permalink":"https://shinfj.github.io/publication/efficient-human-pose-estimation-in-video-using-fast-object-detection/","publishdate":"2019-12-08T00:00:00Z","relpermalink":"/publication/efficient-human-pose-estimation-in-video-using-fast-object-detection/","section":"publication","summary":"We propose a method for more accurate human pose estimation in video that not only merges multiple pose candidates within a frame using kernel density estimation, but also integrates the pose estimates of neighboring frames by exploiting the temporal continuity of video. In addition, we speed up processing by locating people in each frame with Faster R-CNN, a method for fast generic object detection. To verify the effectiveness of the proposed method, we compared it against two existing methods on videos of several TED talks. The results show that the proposed method estimates human poses with an accuracy of roughly 60%, 3.4%-5.0% higher than the existing methods, while roughly doubling the processing speed.","tags":[],"title":"Efficient Human Pose Estimation in Video using Fast Object Detection","type":"publication"},{"authors":["Shin Fujieda","Toshihiko Yamasaki","Kiyoharu Aizawa"],"categories":null,"content":"","date":1448928e3,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1448928e3,"objectID":"e383d3105499f566c973ce5c78c49859","permalink":"https://shinfj.github.io/publication/human-pose-estimation-in-video-considering-temporal-consistency/","publishdate":"2019-12-08T00:00:00Z","relpermalink":"/publication/human-pose-estimation-in-video-considering-temporal-consistency/","section":"publication","summary":"We present a method to estimate human poses in videos considering temporal consistency. In addition to the kernel density approximation based pose estimation for the flexible mixtures-of-parts model, we extend the idea to the temporal domain. We conducted experiments with our proposed method on three videos. As a result, we demonstrate that the accuracy of our proposed method is 3.4-5.0% greater than that of previous approaches.","tags":[],"title":"Human Pose Estimation in Video Considering Temporal Consistency","type":"publication"}]