-
- [Research] Papers from Prof. Jinkyu Lee's lab accepted for publication at ACM/IEEE DAC 2024 and IEEE RTAS 2024
- Title: Papers from Prof. Jinkyu Lee's lab (Real-Time Computing Lab, RTCL@SKKU) presented at ACM/IEEE DAC 2024 and IEEE RTAS 2024. Papers written at the Real-Time Computing Lab (advisor: Prof. Jinkyu Lee) were presented at ACM/IEEE DAC 2024 (the 61st Design Automation Conference) and IEEE RTAS 2024 (the 30th IEEE Real-Time and Embedded Technology and Applications Symposium). ACM/IEEE DAC is the top international conference in design automation (KIISE highest grade, BK21+ IF3); this year it was held in San Francisco, USA, on June 23-27, 2024. IEEE RTAS is a top-two international conference in real-time systems (KIISE highest grade, BK21+ IF2); this year it was held in Hong Kong on May 13-16, 2024, where a total of 29 papers were presented. The ACM/IEEE DAC 2024 paper addresses timing guarantees for AI workloads running on small IoT devices such as MCUs; master's student Seokmin Kang (first author), Ph.D. student Seongtae Lee (co-first author), and undergraduate Hyunwoo Koo participated under the supervision of Prof. Jinkyu Lee, in joint work with Prof. Hoon Sung Chwa of DGIST. The IEEE RTAS 2024 paper addresses timing guarantees for AI workloads in memory-constrained environments and was led by Prof. Hoon Sung Chwa's team at DGIST, with Prof. Jinkyu Lee participating. ACM/IEEE DAC 2024 Website https://www.dac.com/ IEEE RTAS 2024 Website https://2024.rtas.org/ Real-Time Computing Lab Website https://rtclskku.github.io/website/
- Paper Title: RT-MDM: Real-Time Scheduling Framework for Multi-DNN on MCU Using External Memory
- Abstract: As the application scope of DNNs executed on microcontroller units (MCUs) extends to time-critical systems, it becomes important to ensure timing guarantees for the increasing demand of DNN inferences. To this end, this paper proposes RT-MDM, the first Real-Time scheduling framework for Multiple DNN tasks executed on an MCU using external memory. Identifying execution-order dependencies among segmented DNN models and the memory requirements for parallel execution subject to those dependencies, we propose (i) a segment-group-based memory management policy that achieves isolated memory usage within a segment group and shared memory usage across different segment groups, and (ii) an intra-task scheduler specialized for the proposed policy. Implementing RT-MDM on an actual system and optimizing its parameters for DNN segmentation and segment-group mapping, we demonstrate the effectiveness of RT-MDM in accommodating more DNN tasks while providing their timing guarantees.
- Paper Title: RT-Swap: Addressing GPU Memory Bottlenecks for Real-Time Multi-DNN Inference
- Abstract: The increasing complexity and memory demands of Deep Neural Networks (DNNs) for real-time systems pose new significant challenges, one of which is the GPU memory capacity bottleneck, where the limited physical memory inside GPUs impedes the deployment of sophisticated DNN models. This paper presents, to the best of our knowledge, the first study of addressing the GPU memory bottleneck issues, while simultaneously ensuring the timely inference of multiple DNN tasks. We propose RT-Swap, a real-time memory management framework that enables transparent and efficient swap scheduling of memory objects, employing the relatively larger CPU memory to extend the available GPU memory capacity, without compromising timing guarantees. We have implemented RT-Swap on top of representative machine-learning frameworks, demonstrating its effectiveness in making significantly more DNN task sets schedulable, at least 72% more than existing approaches, even when the task sets demand up to 96.2% more memory than the GPU's physical capacity. Jinkyu Lee | jinkyu.lee@skku.edu | RTCL@SKKU | https://rtclskku.github.io/website/
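The core idea behind RT-Swap, using larger host (CPU) memory to back a limited device (GPU) pool, can be illustrated with a toy manager that evicts least-recently-used objects from a small "device" pool to a "host" pool and swaps them back in on demand. This is only a sketch of the general swapping idea; the `SwapManager` class and all names here are hypothetical, not the paper's framework.

```python
from collections import OrderedDict

class SwapManager:
    """Toy illustration: a small 'device' pool backed by a larger 'host' pool.
    The least-recently-used object is swapped out when the device pool is full."""
    def __init__(self, device_capacity):
        self.device_capacity = device_capacity
        self.device = OrderedDict()   # object name -> payload, kept in LRU order
        self.host = {}                # swapped-out objects live here

    def access(self, name, payload=None):
        if name in self.device:                      # hit: refresh LRU position
            self.device.move_to_end(name)
        else:                                        # miss: make room, then swap in
            if len(self.device) >= self.device_capacity:
                victim, data = self.device.popitem(last=False)
                self.host[victim] = data             # swap out the LRU object
            self.device[name] = self.host.pop(name, payload)
        return self.device[name]

mgr = SwapManager(device_capacity=2)
mgr.access("conv1", b"weights-1")
mgr.access("conv2", b"weights-2")
mgr.access("conv3", b"weights-3")   # evicts conv1 to host memory
print(sorted(mgr.device), sorted(mgr.host))  # ['conv2', 'conv3'] ['conv1']
```

A real framework must additionally overlap the swap traffic with computation and bound its latency, which is exactly where the paper's timing analysis comes in.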
-
- 작성일 2024-07-01
- 조회수 2510
-
- [Research] Prof. Simon S. Woo (DASH Lab) won the PAKDD 2024 Best Paper Running-Up Award (2nd Place)
- Binh M. Le and Prof. Simon S. Woo's paper, "SEE: Spherical Embedding Expansion for Improving Deep Metric Learning," received the Best Paper Running-Up Award (2nd best paper) at PAKDD 2024 (BK CS IF=1), held in Taipei, Taiwan, in May 2024. Background information about the award: "This year, PAKDD received 720 excellent submissions, and the selection process was competitive, rigorous, and thorough with over 500 PC and 100 SPC members. An award committee was formed by a chair and four committee members from different countries. There are only one Best Paper Award, two Best Paper Running-Up Awards, and one Best Student Paper Award." Paper Link: https://link.springer.com/chapter/10.1007/978-981-97-2253-2_11
-
- 작성일 2024-06-07
- 조회수 2755
-
- [Faculty] [23.12.19] Prof. Eun-seok Lee speaks at the 2nd Digital Sprout Conference
- On December 19, 2023, Professor Eun-seok Lee of Sungkyunkwan University (Dean of the College of Software Convergence and Director of the SW-Centered University Project) gave a lecture on "The Role of Universities in the Paradigm Shift of Education" at the 2nd Digital Sprout Conference held in Gangnam, hosted by the Ministry of Education, the 17 metropolitan and provincial offices of education, and the Korea Foundation for the Advancement of Science and Creativity.
-
- 작성일 2023-12-26
- 조회수 2912
-
- [Student] The AI Convergence major teams "Idiots" and "RMDP" won the Gold Award and Special Award at the 1st Seoul-Area Universities AI Joint Competition
- Teams "Idiots" (Kim Myung-seop, Lee Soo-hyung, Cho Jung-hwan) and "RMDP" (Kim Soo-min, Kim Hyun-seo, Jang Woo-hyuk), students in the Artificial Intelligence Convergence major of our university's Department of Global Convergence, won the Gold Award (2nd place) and the SKT Special Award at the 1st Seoul-area universities AI joint competition (S.C.A.I.C.O.), held at Sejong University's Ocean AI Center on Sunday, November 19. The competition was organized by the SW-Centered University Project Groups of our university, Sejong University, Konkuk University, Kyung Hee University, and Inha University, together with the Catholic University LINC 3.0 Project Group and the Catholic University Digital Transformation ICC; it was hosted by the student councils of the related departments at each university and sponsored by 3M, SK Telecom, and DEVOCEAN. A total of 13 teams from eight universities in the Seoul metropolitan area, including our university, qualified through internal preliminary rounds at each university. The competition was held under the theme of producing results with AI using keywords from "Trend Korea 2023." The "Idiots" team (Lee Soo-hyung '22, Kim Myung-seop '20, Cho Jung-hwan '22), which won the Gold Award (2nd place), received KRW 1,000,000, and the "RMDP" team, which won the SKT Special Award, received KRW 500,000. The Grand Prize (1st place) went to the KHAI team of Kyung Hee University, and the Silver Prize (3rd place) to the "Brown Sugar" team of Sejong University. Notably, this competition is significant in that the student councils of the participating departments organized and hosted the event, leading the academic exchange.
-
- 작성일 2023-12-26
- 조회수 2762
-
- [Research] A paper from the Intelligent Embedded Systems Laboratory (advisor: Prof. Dongkun Shin) accepted for publication at AAAI 2024
- One paper from the Intelligent Embedded Systems Laboratory (advisor: Prof. Dongkun Shin) has been accepted for publication at the AAAI Conference on Artificial Intelligence 2024 (AAAI-24), a premier conference in the field of artificial intelligence.
Paper #1: Proxyformer: Nystrom-Based Linear Transformer with Trainable Proxy Tokens (Sangho Lee, M.S. program in Artificial Intelligence; Hayun Lee, Ph.D. program in Electrical, Electronic, and Computer Engineering)
The paper focuses on the complexity of self-attention operations. To solve the quadratic complexity in the input sequence length n of conventional self-attention, it proposes an extended Nyström attention method that integrates the Nyström method with neural memory. First, by introducing learnable proxy tokens that serve as the landmarks of the Nyström method, it reduces the complexity of the attention operation from quadratic to linear while producing landmarks that effectively reflect the input sequence. Second, contrastive learning is applied so that the attention map can be restored accurately from minimal landmarks. Third, a dropout method suited to the decomposed attention matrices is developed, enabling the normalization of the proxy tokens to be learned effectively. The proposed Proxyformer approximates the attention map with a minimal number of proxy tokens, outperforms existing techniques on the LRA benchmark, and achieves 3.8 times higher throughput with only 0.08 times the memory usage of conventional self-attention on 4096-length input sequences.
[Paper #1 Information] Proxyformer: Nystrom-Based Linear Transformer with Trainable Proxy Tokens. Sangho Lee, Hayun Lee, Dongkun Shin. Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI), 2024. Transformer-based models have demonstrated remarkable performance in various domains, including natural language processing, image processing and generative modeling. The most significant contributor to the successful performance of Transformer models is the self-attention mechanism, which allows for a comprehensive understanding of the interactions between tokens in the input sequence. However, there is a well-known scalability issue, the quadratic dependency of self-attention operations on the input sequence length n, making the handling of lengthy sequences challenging. To address this limitation, there has been a surge of research on efficient transformers, aiming to alleviate the quadratic dependency on the input sequence length. Among these, the Nyströmformer, which utilizes the Nyström method to decompose the attention matrix, achieves superior performance in both accuracy and throughput. However, its landmark selection exhibits redundancy, and the model incurs computational overhead when calculating the pseudo-inverse matrix. We propose a novel Nyström method-based transformer, called Proxyformer. Unlike the traditional approach of selecting landmarks from input tokens, the Proxyformer utilizes trainable neural memory, called proxy tokens, for landmarks. By integrating contrastive learning, input injection, and a specialized dropout for the decomposed matrix, Proxyformer achieves top-tier performance for long sequence tasks in the Long Range Arena benchmark.
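The Nyström-style factorization the abstract describes can be sketched in a few lines of numpy: the n-by-n attention map is approximated through m landmark ("proxy") tokens, so the cost becomes linear in n. This is a minimal illustration with random, untrained proxy tokens; shapes and names are assumptions, not the paper's implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def nystrom_attention(Q, K, V, P):
    """Nyström-style linear attention with m landmark ('proxy') tokens P.
    Q, K, V: (n, d); P: (m, d) with m << n. Cost is O(n*m*d), not O(n^2*d)."""
    scale = np.sqrt(Q.shape[-1])
    kernel_1 = softmax(Q @ P.T / scale)   # (n, m): queries vs. landmarks
    kernel_2 = softmax(P @ P.T / scale)   # (m, m): landmarks vs. landmarks
    kernel_3 = softmax(P @ K.T / scale)   # (m, n): landmarks vs. keys
    return kernel_1 @ np.linalg.pinv(kernel_2) @ (kernel_3 @ V)

rng = np.random.default_rng(0)
n, m, d = 256, 8, 16
Q, K, V = rng.normal(size=(3, n, d))
P = rng.normal(size=(m, d))   # trainable proxy tokens in the real model
out = nystrom_attention(Q, K, V, P)
print(out.shape)  # (256, 16)
```

In Proxyformer the proxy tokens P are learned (with contrastive learning and a specialized dropout) rather than drawn at random, which is what makes the approximation accurate with so few landmarks.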
-
- 작성일 2023-12-26
- 조회수 3573
-
- [Research] Four papers from Prof. Jae-pil Heo's lab accepted for publication at AAAI 2024
- Four papers from the Visual Computing Laboratory (advisor: Prof. Jae-pil Heo) have been accepted for publication at the AAAI Conference on Artificial Intelligence 2024 (AAAI-24), a premier conference in the field of artificial intelligence.
Paper #1: "Towards Squeezing-Averse Virtual Try-On via Sequential Deformation" (Sang-heon Shim, Ph.D. program in Artificial Intelligence; Jung, M.S. program in Artificial Intelligence)
Paper #2: "Noise-free Optimization in Early Training Steps for Image Super-Resolution" (Min-gyu Lee, Ph.D. program in Artificial Intelligence)
Paper #3: "VLCounter: Text-aware Visual Representation for Zero-Shot Object Counting" (Seung-gu Kang, M.S. program in Artificial Intelligence; Eui-yeon Kim, M.S. program in Artificial Intelligence; and co-authors)
Paper #4: "Task-disruptive Background Suppression for Few-Shot Segmentation" (Soo-ho Park, Ph.D. program in Software/Mechanical Engineering; Soo-bin Lee, Ph.D. program in Artificial Intelligence; Jin-ik Hyun, Ph.D. program in Artificial Intelligence; and Hyun-seok Sung)
"Towards Squeezing-Averse Virtual Try-On via Sequential Deformation" addresses visual quality degradation in high-resolution virtual try-on image generation. Specifically, the texture of clothes was squeezed at the sleeve, as shown in the upper row of Fig. 1(a) of the paper. The main cause is a gradient conflict between the two loss functions fundamental to the field, the total variation (TV) loss and the adversarial loss: the TV loss tries to separate the boundary between the sleeve and the torso in the warped clothing mask, while the adversarial loss tries to merge them. These opposing goals feed incorrect gradients back into the cascaded appearance flow estimation, producing sleeve-squeezing artifacts. To address this, the paper approaches the problem through the inter-layer connections of the network. It diagnoses that sleeve squeezing occurs because conventional cascaded appearance flow estimation is connected with residual connections and is strongly influenced by the adversarial loss, and mitigates this by introducing a sequential connection structure between cascaded appearance flows in the last layer of the network. Meanwhile, the lower row of Fig. 1(a) shows a different type of squeezing artifact around the waist. For this, the paper proposes first warping the clothes into a tucked-out shirt style and then partially removing texture from the initial warping result, and implements the computation for it. The proposed technique successfully resolves both types of artifacts.
In "Noise-free Optimization in Early Training Steps for Image Super-Resolution," the authors address limitations of existing training methodologies and knowledge distillation in image super-resolution. Specifically, a high-resolution image is decomposed into two key elements: an optimal centroid and latent noise. This analysis confirms that latent noise in the training data destabilizes early training. To address the issue, the authors propose a more stable training technique that eliminates latent noise during training using Mixup and pre-trained networks. The proposed technique brings consistent performance improvements across multiple models in fidelity-oriented single-image super-resolution.
In "VLCounter: Text-aware Visual Representation for Zero-Shot Object Counting," the authors address counting the objects specified by text in an image. The paper points out the heavy computation and the possibility of error propagation in the two-stage methods of previous studies. To solve this, the authors propose a one-stage baseline, VLBase, and extend it to VLCounter with three main techniques. First, instead of re-training CLIP, a large pre-trained model, they introduce Visual Prompt Tuning (VPT) and add textual information to the learnable tokens of VPT so that image features of the target object are highlighted. Second, fine-tuning yields a similarity map that emphasizes only the important parts of the object region rather than the whole, increasing object-centric activation. Third, to improve the model's generalization and localize objects accurately, the image encoder features are integrated into the decoding stage and multiplied with the earlier similarity map to focus on object regions. The proposed technique not only significantly exceeds the performance of existing methods but also doubles training and inference speed with a lightweight model.
The "Task-disruptive Background Suppression for Few-shot Segmentation" paper addresses how to efficiently handle the background of support images in few-shot segmentation, which finds objects in new (query) images by referring to a small number of support images and masks.
-
- 작성일 2023-12-26
- 조회수 3415
-
- [Research] Three papers from Prof. Honguk Woo's lab (CSI Lab) accepted for publication at AAAI 2024
- Three papers from the CSI Lab (advisor: Prof. Honguk Woo) have been accepted to the 38th Annual AAAI Conference on Artificial Intelligence (AAAI 2024). The papers will be presented in Vancouver, Canada, in February 2024.
1. The paper "SemTra: A Semantic Skill Translator for Cross-Domain Zero-shot Policy Adaptation" was written by Sang-woo Shin (M.S. program), Min-jong Yoo (Ph.D. program), and Jung-woo Lee (undergraduate program). The study concerns zero-shot adaptation, which enables embodied agents such as robots to respond quickly to changes in their surroundings without additional training. It presents the SemTra framework, which translates multimodal inputs such as vision, sensor data, and user commands into semantically interpretable skills, optimizes those skills for the target environment, and executes them as sequences of actions. Building on pre-trained language models that turn implicit behavior patterns into executable skills (continuous behavior patterns), SemTra showed high performance when evaluated in autonomous-agent environments including Meta-World, Franka Kitchen, RLBench, and CARLA.
2. The paper "Risk-Conditioned Reinforcement Learning: A Generalized Approach for Adaptation to Varying Risk Measures" was written by Kwang-pyo Yoo (Ph.D. program) and Jin-woo Park (M.S. program), researchers in the Department of Software. The study proposes risk-conditioned reinforcement learning for applications that require consequential, risk-aware decision-making, such as finance, robotics, and autonomous driving. In particular, to cope with dynamically changing risk preferences using a single trained reinforcement learning model, it implements a model based on Weighted Value-at-Risk (WV@R) that, for the first time, provides a single representation of heterogeneous risk measures, enabling flexible reinforcement-learning-based decision-making in risk-management-focused applications.
3. The paper "Robust Policy Learning via Offline Skill Diffusion" was written by Woo-kyung Kim (Ph.D. program) and Min-jong Yoo (Ph.D. program), researchers in the Department of Software. The work presents DuSkill, a new offline learning framework that uses a diffusion model to generate diverse embodied-agent skills extending beyond the finite skills in a dataset. The DuSkill framework enhances the diversity of offline-learned skills, accelerating policy learning for multi-task settings and heterogeneous environment domains, and improving the robustness of the learned policies.
The CSI Lab conducts research on network and cloud system optimization and on embodied agents such as autonomous robots and drones, using machine learning, reinforcement learning, and self-supervised learning. The research in these AAAI 2024 papers is supported by the Human-Centered Artificial Intelligence Core Source Technology Project (IITP), the National Research Foundation of Korea's Basic Research Program (NRF), and the Graduate School of Artificial Intelligence. Honguk Woo | hwoo@skku.edu | CSI Lab | https://sites.google.com/view/csi-agent-group
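Value-at-Risk, the building block behind the WV@R measure mentioned above, is easy to illustrate empirically: it is a lower quantile of the return distribution, and a weighted combination of VaR at several levels gives one scalar risk score. This is a generic sketch of the standard definitions, not the paper's formulation; the function names are illustrative.

```python
import numpy as np

def value_at_risk(returns, alpha):
    """Empirical VaR at level alpha: the return threshold below which
    the worst alpha-fraction of outcomes falls (the lower quantile)."""
    return np.quantile(returns, alpha)

def weighted_var(returns, alphas, weights):
    """A weighted combination of VaR at several levels -- a simple stand-in
    for the spirit of a weighted value-at-risk style measure."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    return sum(w * value_at_risk(returns, a) for a, w in zip(alphas, weights))

rng = np.random.default_rng(1)
returns = rng.normal(loc=0.0, scale=1.0, size=100_000)
print(value_at_risk(returns, 0.05))  # roughly -1.64 for a standard normal
```

Conditioning a policy on the weights (i.e., on the preferred risk level) is what lets a single trained model serve many risk preferences at run time.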
-
- 작성일 2023-12-26
- 조회수 2467
-
- [Research] A paper from Prof. Jinkyu Lee’s Lab. (RTCL@SKKU) published in IEEE RTSS 2023
- A paper from RTCL@SKKU (Advisor: Jinkyu Lee) has been published in IEEE RTSS 2023. IEEE RTSS is the premier conference in real-time systems, in which around 30 papers are usually published every year. This year, IEEE RTSS 2023 was held in Taipei, Taiwan. IEEE RTSS 2023 Website http://2023.rtss.org/ Real-Time Computing Lab. Website https://rtclskku.github.io/website/ - (Paper Title) RT-Blockchain: Achieving Time-Predictable Transactions - (Abstract) Although blockchain technology is being increasingly utilized across various fields, the challenge of providing timing guarantees for transactions remains unmet, which is an obstacle to implementing blockchain solutions for time-sensitive applications such as high-frequency trading and real-time payments. In this paper, we propose the first solution to achieve a timing guarantee on blockchain. To this end, we raise and address two issues for timely transactions on a blockchain: (a) architectural support, and (b) real-time scheduling principles specialized for blockchain. For (a), we modify an existing blockchain network, offering an interface to preferentially select the transactions with the earliest deadlines. We then extend the blockchain network to provide flexibility in the number of generated blocks at a single block time. Under such architectural support, we achieve (b) with three steps. First, to resolve a discrepancy between a periodic request of a transaction-generating node and the corresponding arrival on a block-generating node, we translate the former into the latter, which eases the modeling of the transaction load imposed on the blockchain network. Second, we derive a schedulability condition for the modeled transaction load, which guarantees no missed deadlines for all transactions under a work-conserving deadline-based scheduling policy. Last, we develop a lazy scheduling policy and its condition, which reduces the number of generated blocks without compromising the degree of timing guarantees of the work-conserving policy. By implementing RT-Blockchain on top of an existing open-source blockchain project, we demonstrate the effectiveness of the proposed scheduling principles with architectural support in not only ensuring timely transactions but also reducing the number of generated blocks. Jinkyu Lee | jinkyu.lee@skku.edu | RTCL@SKKU | https://rtclskku.github.io/website/
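The deadline-based selection described in the abstract (preferentially picking the transactions with the earliest deadlines) is, at its core, earliest-deadline-first scheduling, which a priority queue expresses directly. A generic sketch of the idea, not RT-Blockchain's code; names and the tuple layout are assumptions.

```python
import heapq

def pick_transactions(pending, capacity):
    """Earliest-deadline-first selection of transactions for the next block.
    pending: iterable of (deadline, tx_id) pairs; capacity: max txs per block."""
    heap = list(pending)
    heapq.heapify(heap)   # min-heap ordered by deadline
    return [heapq.heappop(heap) for _ in range(min(capacity, len(heap)))]

pending = [(40, "tx-c"), (10, "tx-a"), (25, "tx-b"), (60, "tx-d")]
block = pick_transactions(pending, 3)
print([tx for _, tx in block])  # ['tx-a', 'tx-b', 'tx-c']
```

The paper's lazy policy goes a step further: it deliberately delays block generation as long as the schedulability condition still holds, trading fewer blocks for the same timing guarantees.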
-
- 작성일 2023-12-11
- 조회수 2763
-
- [Student] [23.11.29] A "SKKU-like" space where history, tradition, creation, and innovation combine: Sungkyunkwan University opens its Software Complex (Sof:com)
- Sungkyunkwan University (President Ji-beom Yoo) announced on the 28th that it held the unveiling and opening ceremony of its Software Complex (Sof:com) at the Samsung Library on the Natural Sciences Campus, following the relocation of the SW-Centered University Project Group (Director Eun-seok Lee, Dean of the College of Software Convergence). The Sof:com zone, where the ceremony was held, is a state-of-the-art space for SW and AI talent development spanning Sungkyunkwan University's Natural Sciences Campus and Humanities and Social Sciences Campus. It covers a total of 1,206.4 square meters, including a seminar room and rooms dedicated to Sof:com programs that foster students' R&D skills. The ceremony, which included the unveiling of the Sof:com signboard, an opening ceremony, and a tour of the space, was attended by President Ji-beom Yoo, Vice Presidents Jae-hyun Kim and Jae-poong Choi, members of the university academic affairs committee (deans and department heads), and related departments in the university. President Yoo said, "I am truly happy to open Sof:com as a space that shows what is 'SKKU-like,' combining creative, future-oriented collaboration and innovation strategies with history and tradition. Through learning that brings together diverse fields including SW and AI, and through complex thinking and self-directed learning, we will strive to lay the foundation for the respected global university that Sungkyunkwan University aims to be." Director Eun-seok Lee added, "Sof:com is a complex space where you can see in one place the results of the SW-Centered University Project's step-by-step programs for basic SW education, talent development, and industry-academic cooperation, and it can constantly create new value by converging diverse cultures and information across disciplines. We will do our best to become a university that practices sharing, enabling social contribution through genuine education for students and the disclosure and sharing of results." [Photo: Commemoration ceremony for the unveiling and opening of the Sof:com signboard. Courtesy of Sungkyunkwan University] Tours of each Sof:com space were led by students: ▲ self-driving car pilot driving and algorithms (S-car: Sang-hoon Han, Min-seok Kang), ▲ Google study workshops and Skill-Up Path (GDSC: Kun-hee Lee), ▲ Online Myeongryundang, King GoCoin, and an open-source data management system (SSA: Soo-hyun Ryu, Hye-jin Yoo, Kwang-ho Yoo), ▲ immersive video content and game production (MAV: Ji-hoon Yoo), and ▲ an Ars Electronica entry (Cultec: Ji-hye Park). The Sof:com Media Gallery, which drew the most attention, features immersive video works (GLOBAL SKKU, famous paintings, Convergence), Ars Electronica student works, and real-time video of the Natural Sciences Campus. Much of the content was produced with Sanghwa Co., Ltd. and Prime Panorama, which have MOUs with the SW-Centered University Project Group; advice on the latest industry trends and comparison programs for students' immersive video content, metaverse, AR, and VR work will continue to be offered next year. The project group's space was first established as a first-generation business space (SOScom) on the sixth floor of the Semiconductor Hall after the university was selected for stage 1 of the SW-Centered University Project in 2015, and was remodeled as the 1.5-generation Sof:com in 2022 following stage 2 selection in 2021. In 2023 it was established as a second-generation space on the second floor of the Samsung Academic Information Center, a landmark of the Natural Sciences Campus, combining the Ministry of Science and ICT-IITP's SW-Centered University Project with the Ministry of Education's University Innovation Support Project to strengthen the qualitative innovation of university education and student support for future talent development. Details of the major Sof:com spaces are available via VR and Metaport (https://skb.skku.edu/swuniv/VR.do).
-
- 작성일 2023-12-04
- 조회수 1856
-
- [Faculty] Data Intelligence and Learning Lab (advisor: Prof. Jongwuk Lee) publishes seven papers at SIGIR, CIKM, and EMNLP 2023
- The Data Intelligence and Learning (DIAL) lab has finally approved three papers for publication at SIGIR 2023, the world's most prestigious information retrieval society, and published them in Madrid, Spain, on July 23. On October 21, a total of two papers were finally approved for publication in CIKM 2023, the world's most prestigious data mining conference, and the paper was published in Birmingham, England. In addition, a total of two papers have been approved for publication in EMNLP 2023, the world's most prestigious natural language processing society, and they will present them in Singapore in December. [List of papers] 1. 1. It’s Enough: Relaxing Diagonal Constraints in Linear Autoencoders for Recommendation (SIGIR'23) 2. 2. uCTRL: Unbiased Contrastive Representation Learning via Alignment and Uniformity for Collaborative Filtering (SIGIR'23) 3. 3. ConQueR: Contextualized Query Reduction using Search Logs (SIGIR'23) 4. Forgetting-aware Linear Bias for Attentive Knowledge Tracing (CIKM'23) 5. 5. Toward a Better Understanding of Loss Functions for Collaborative Filtering (CIKM'23) 6. 6. GLEN: Generative Retrieval via Lexical Index Learning (EMNLP'23) 7. 7. It Ain't Over: A Multi-aspect Diverse Math Word Problem Dataset (EMNLP'23) 연구 1: Jaewan Moon, Hye-young Kim, and Jongwuk Lee, “It’s Enough: Relaxing Diagonal Constraints in Linear Autoencoders for Recommendation”, 46th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR), 2023 This study conducts a theoretical analysis of diagonal constraints in a linear autoencoder-based recommendation system, We propose relaxed linear autocoders (RLAE) that relax diagonal constraints. The linear autoencoder model learns the weight matrix between items through L2 normalization and convex optimization with zero-diagonal constraints. This paper aims to theoretically understand the characteristics of two constraints in a linear autoencoder model. 
Analysis of the weight matrix using singular value decomposition (SVD) and principal component analysis (PCA) reveals that L2 normalization promotes the effectiveness of high-ranking principal components. On the other hand, we have shown that diagonal component removal constraints can reduce the effects of low-ranking principal components, leading to performance degradation of unpopular items. Inspired by these analysis results, we propose a simple yet effective linear autoencoder model, the Relaxed Denoising Linear AutoEncoder (RLAE) and the Relaxed Denoising Linear AutoEncoder (RDLAE), using diagonal inequality constraints. In addition, the proposed method of adjusting the degree of diagonal constraints provides proof that it is a generalized form of the existing linear model. Experimental results show that our model is similar or better than state-of-the-art linear and nonlinear models on six benchmark datasets. This supports theoretical insights into diagonal constraints, and we find significant performance improvements, especially in unbiased evaluation, which eliminates popularity and popularity bias. If you would like to know more about this paper, please refer to the following address. URL: https://dial.skku.edu/blog/sigir2023_itsenough 연구 2: Jae-woong Lee, Seongmin Park, Mincheol Yoon, and Jongwuk Lee, “uCTRL: Unbiased Contrastive Representation Learning via Alignment and Uniformity for Collaborative Filtering”, 46th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR, short paper), 2023 This study solves the problem that when learning using implicit feedback (e.g., click, etc.) in a recommendation system, implicit feedback is mainly biased toward popular users and items, so that the representation of users and items is learned differently from the preferences of real users and items. 
We point out that existing debiasing studies for recommendation do not consider the contrastive loss functions widely used for better representation learning, and we fill this gap by proposing Unbiased ConTrastive Representation Learning (uCTRL). First, inspired by DirectAU, a model that uses contrastive loss functions in recommender systems, we formulate contrastive representation learning as two loss functions: alignment and uniformity. The alignment function makes the representations of a user and an item similar for observed user-item interactions, while the uniformity function spreads the user and item distributions evenly. We find that the alignment function is biased toward popular users and items; after estimating this bias, we remove it using inverse propensity weighting (IPW). We also develop a new bias estimation method that considers both users and items in the IPW. Experimental results show that the proposed uCTRL outperforms state-of-the-art debiasing models on four benchmark datasets (MovieLens 1M, Gowalla, Yelp, and Yahoo! R3). If you would like to learn more about this paper, please refer to this address.
URL: https://dial.skku.edu/blog/2023_uctrl

Paper 3: Hye-young Kim*, Minjin Choi*, Sunkyung Lee, Eunseong Choi, Young-In Song, and Jongwuk Lee, “ConQueR: Contextualized Query Reduction using Search Logs”, 46th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR, short paper), 2023

This study proposes a query reduction model based on a pre-trained language model. Query reduction removes unnecessary words from an overly long query so that the user obtains search results matching their intent.
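The inverse propensity weighting idea behind uCTRL (Paper 2 above) can be illustrated with a minimal sketch. The popularity-based propensity estimator here is a simple hypothetical stand-in, not the paper's actual estimator, and all names are illustrative:

```python
import numpy as np

def propensity(counts, eta=0.5):
    # Hypothetical popularity-based propensity: frequently observed
    # users/items get propensity near 1, rare ones near 0.
    return np.clip((counts / counts.max()) ** eta, 1e-3, 1.0)

def ipw_alignment(user_emb, item_emb, weights):
    # Alignment over observed pairs, reweighted by inverse propensity
    # so interactions of unpopular users/items count for more.
    sq_dist = np.sum((user_emb - item_emb) ** 2, axis=1)
    return np.mean(weights * sq_dist)

# Toy interaction log: popularity counts and embeddings per pair
user_counts = np.array([10.0, 2.0, 5.0])
item_counts = np.array([50.0, 1.0, 8.0])
weights = 1.0 / (propensity(user_counts) * propensity(item_counts))
rng = np.random.default_rng(0)
u, v = rng.random((3, 4)), rng.random((3, 4))
loss = ipw_alignment(u, v, weights)
```

The second pair, involving a rare user and a rare item, receives the largest weight, so its representations are pulled together most strongly during training.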
The proposed model, ConQueR, approaches the problem from two perspectives: (i) extracting core terms and (ii) selecting sub-queries. The core-term extraction method picks out the key terms of the original query at the word level, while the sub-query selection method judges at the sentence level whether a given sub-query is a correct reduction of the original query. Since the two perspectives operate at different granularities and complement each other, ConQueR combines them to produce the final reduced query. In addition, we introduce a truncated loss that handles the noisy samples occurring frequently in search logs, making training more robust. Through performance experiments and satisfaction surveys on search log data collected from a commercial search engine, we demonstrate that the proposed model performs query reduction effectively. If you would like to know more about this paper, please refer to the following address.
URL: https://dial.skku.edu/blog/2023_conquer

Paper 4: Yoonjin Im*, Eunseong Choi*, Heejin Kook, and Jongwuk Lee, “Forgetting-aware Linear Bias for Attentive Knowledge Tracing”, The 32nd ACM International Conference on Information and Knowledge Management (CIKM, short paper), 2023

Knowledge tracing models a learner's proficiency by predicting whether the learner will answer a new target question correctly, based on their sequential record of past problem solving. To predict proficiency accurately, it is important to capture both the correlations between questions and learner characteristics such as forgetting behavior. Accordingly, some attention-based knowledge tracing models replace absolute positional embeddings with a relative time-interval bias, modeling forgetting by paying less attention to older problem-solving records.
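The idea of biasing attention by elapsed time can be sketched with a simple linear penalty on attention scores, similar in spirit to ALiBi. This is an illustrative stand-in, not the exact formulation of any of the models discussed, and all names are hypothetical:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attention_with_linear_time_bias(q, k, steps, slope=0.1):
    # Content-based attention of the current query over past records
    scores = (q @ k.T) / np.sqrt(q.shape[-1])
    # Linear forgetting bias: penalize each record by how long ago it
    # occurred, independently of content similarity.
    gap = steps[-1] - steps
    return softmax(scores - slope * gap)

# Five past records with identical content: only recency differs
q = np.ones((1, 4))
k = np.ones((5, 4))
w = attention_with_linear_time_bias(q, k, np.arange(5))
```

Because the penalty is added after the dot product, the recency effect is decoupled from question-to-question similarity, which is the separation FoLiBi argues for.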
However, the longer the problem-solving history, the less effective this forgetting modeling becomes in existing methods. Through an analysis of the generalized formulation, we find that question-to-question correlations are unnecessarily entangled in the computation of conventional relative time-interval biases. To address this, we propose Forgetting-aware Linear Bias for Attentive Knowledge Tracing (FoLiBi), a linear bias that decouples the two factors. The proposed method plugs easily into existing attention-based knowledge tracing models and, despite its simplicity, consistently improves AUC by up to 2.58% over state-of-the-art knowledge tracing models on four benchmark datasets. To learn more about this paper, please refer to the following address.
URL: https://dial.skku.edu/blog/2023_folibi

Paper 5: Seongmin Park, Mincheol Yoon, Jae-woong Lee, Hogun Park, and Jongwuk Lee, “Toward a Better Understanding of Loss Functions for Collaborative Filtering”, The 32nd ACM International Conference on Information and Knowledge Management (CIKM), 2023

This study analyzes the formal relationships among the various loss functions used in collaborative filtering, a cornerstone of recommender systems, and proposes a new loss function based on this analysis. The training of a collaborative filtering model typically consists of three components: an interaction encoder, a loss function, and negative sampling. While many prior studies have designed sophisticated interaction encoders, recent work shows that large performance gains can be achieved simply by replacing the loss function. In this paper, we analyze the relationships between existing loss functions and find that they can be interpreted in terms of alignment and uniformity.
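The alignment and uniformity objectives discussed above can be sketched minimally, following DirectAU's definitions over normalized embeddings; names and toy data are illustrative:

```python
import numpy as np

def alignment(u, v):
    # Alignment: mean squared distance between the embeddings of
    # interacting user-item pairs (lower = better matched).
    return np.mean(np.sum((u - v) ** 2, axis=1))

def uniformity(x, t=2.0):
    # Uniformity: log of the mean Gaussian potential over distinct
    # pairs; lower values mean embeddings are spread more evenly.
    sq = np.sum((x[:, None, :] - x[None, :, :]) ** 2, axis=-1)
    iu = np.triu_indices(x.shape[0], k=1)
    return np.log(np.mean(np.exp(-t * sq[iu])))

# Perfectly aligned pairs vs. collapsed embeddings
users = np.array([[1.0, 0.0], [0.0, 1.0]])
items = users.copy()
collapsed = np.array([[1.0, 0.0], [1.0, 0.0]])
align_loss = alignment(users, items)
```

DirectAU optimizes alignment plus a weighted sum of user-side and item-side uniformity; per the summary above, MAWU refines these with a popularity-aware margin on alignment and dataset-specific weights on uniformity.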
(i) Alignment pulls together the representations of interacting users and items, and (ii) uniformity spreads out the user and item distributions. Inspired by this analysis, we propose Margin-aware Alignment and Weighted Uniformity (MAWU), which refines both objectives using the unique patterns of each dataset: (i) Margin-aware Alignment (MA) mitigates user/item popularity bias, and (ii) Weighted Uniformity (WU) adjusts the relative importance of user and item uniformity to reflect the characteristics of the dataset. Experiments show that MF and LightGCN equipped with MAWU are comparable or superior to state-of-the-art collaborative filtering models with various loss functions on three benchmark datasets. If you would like to know more about this paper, please refer to the following address.
URL: https://dial.skku.edu/blog/2023_mawu

Paper 6: Sunkyung Lee*, Minjin Choi*, and Jongwuk Lee (*: equal contribution), “GLEN: Generative Retrieval via Lexical Index Learning”, The 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2023 (to appear)

This study proposes GLEN, a new generative retrieval model based on lexical index learning. Generative retrieval is a new paradigm in document retrieval that directly generates the identifier of a document relevant to the query. However, existing generative retrieval studies have two major limitations: first, they overlook that generating document identifiers differs substantially from conventional natural language generation; second, they suffer from a training-inference mismatch, since training focuses only on generating identifiers while inference must rank similar documents. To overcome these limitations, this study proposes a generative retrieval method that dynamically learns a lexical index.
The proposed method uses a two-phase index learning strategy: (i) an additional pre-training phase learns keyword-based document identifiers, and (ii) a subsequent phase learns dynamic document identifiers from query-document relevance. Experiments demonstrate that GLEN achieves the best or competitive performance against generative and traditional retrieval baselines on benchmark datasets such as NQ320k, MS MARCO, and BEIR. The code can be found at https://github.com/skleee/GLEN . If you would like to know more about this paper, please refer to the following address.
URL: https://dial.skku.edu/blog/2023_glen

Paper 7: Jiwoo Kim, Youngbin Kim, Ilwoong Baek, JinYeong Bak, and Jongwuk Lee, “It Ain't Over: A Multi-aspect Diverse Math Word Problem Dataset”, The 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2023 (to appear)

This study analyzes the mathematical reasoning ability of large language models (LLMs) and, to evaluate it more rigorously, proposes Diverse Math Word Problems (DMath), a new dataset of about 10K problems. Math word problem (MWP) tasks are challenging tasks that require a deep understanding of natural language together with logical reasoning, and they have been widely used to evaluate the reasoning abilities of language models. Recently, LLMs have achieved high scores on existing MWP benchmarks, leading to the belief that they possess strong mathematical reasoning ability. This paper argues that such results stem from benchmarks of limited diversity, and that diversity should be increased. The paper defines four aspects of diversity that an MWP dataset should have: problem types, lexical usage patterns, languages, and intermediate solution forms.
To define the problem types, the study consulted the mathematics curricula of the United States and Korea and settled on five categories: arithmetic, comparison, correspondence, geometry, and possibility. Because previous studies focused on arithmetic, little was known about how LLMs handle the other types of mathematical reasoning. Our experiments show that an LLM's reasoning ability varies greatly across reasoning types. DMath also pursues high diversity in lexical usage patterns, languages, and intermediate solution forms, which makes it a more challenging dataset than its predecessors. In addition, 43 annotators participated in organizing and building the data, ensuring high quality through careful verification. Thanks to its diversity, DMath supports examining and evaluating the distinct reasoning abilities of LLMs. The data can be found at https://github.com/JiwooKimAR/dmath . If you would like to know more about this paper, please refer to the following address.
URL: https://dial.skku.edu/blog/2023_dmath
-
- Date: 2023-12-04
- Views: 1850