The AI Book
    Method identified to double computer processing speeds

22 February 2024


    Imagine doubling the processing power of your smartphone, tablet, personal computer, or server using the existing hardware already in these devices.

    Hung-Wei Tseng, a UC Riverside associate professor of electrical and computer engineering, has laid out a paradigm shift in computer architecture to do just that in a recent paper titled, “Simultaneous and Heterogeneous Multithreading.”

Tseng explained that today’s computing devices increasingly include graphics processing units (GPUs), hardware accelerators for artificial intelligence (AI) and machine learning (ML), and digital signal processing units as essential components. These components process data separately, passing it from one processing unit to the next, which in effect creates a bottleneck.

    In their paper, Tseng and UCR computer science graduate student Kuan-Chieh Hsu introduce what they call “simultaneous and heterogeneous multithreading” or SHMT. They describe their development of a proposed SHMT framework on an embedded system platform that simultaneously uses a multi-core ARM processor, an NVIDIA GPU, and a Tensor Processing Unit hardware accelerator.
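The core idea can be pictured with a toy sketch. This is not the authors’ SHMT framework, only a minimal illustration of the shift it describes: instead of pipelining a computation through one processing unit after another, the work is partitioned so that several workers (standing in for CPU cores, a GPU, and a TPU) compute their shares at the same time.

```python
# Toy illustration of simultaneous heterogeneous dispatch (NOT the
# authors' SHMT framework): one computation is split into chunks that
# run concurrently on different workers, each a stand-in for a CPU
# core, GPU, or accelerator, rather than being processed sequentially.
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    # Stand-in for a kernel that one processing unit would execute.
    return sum(x * x for x in chunk)

def simultaneous_sum(data, n_workers=3):
    # Partition the work so all workers compute their shares at once.
    size = max(1, (len(data) + n_workers - 1) // n_workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        # Combine the partial results after all workers finish.
        return sum(pool.map(partial_sum, chunks))

print(simultaneous_sum(list(range(10))))  # prints 285
```

The real research problem, of course, is far harder than this sketch suggests: the actual framework must decide how to split work across units with very different capabilities and memory spaces, which is where the paper’s contribution lies.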

In their tests, the system achieved a 1.96× speedup and a 51% reduction in energy consumption.

    “You don’t have to add new processors because you already have them,” Tseng said.

    The implications are huge.

    Simultaneous use of existing processing components could reduce computer hardware costs while also reducing carbon emissions from the energy produced to keep servers running in warehouse-size data processing centers. It also could reduce the need for scarce freshwater used to keep servers cool.

    Tseng’s paper, however, cautions that further investigation is needed to answer several questions about system implementation, hardware support, code optimization, and what kind of applications stand to benefit the most, among other issues.

    The paper was presented at the 56th Annual IEEE/ACM International Symposium on Microarchitecture held in October in Toronto, Canada. The paper garnered recognition from Tseng’s professional peers in the Institute of Electrical and Electronics Engineers, or IEEE, who selected it as one of 12 papers included in the group’s “Top Picks from the Computer Architecture Conferences” issue to be published this coming summer.
