MakerGram
    How to capture audio with the PDM mic and send it to a central device over BLE on the Seeed Studio XIAO nRF52840 Sense (built-in BLE and TinyML support)

    TinyML
    • amalkrishnam3:

      I have been working on a project in which I want to capture audio using the PDM mic and transmit it over the available BLE. I am new to hardware and these sensors, so I am unable to make sense of the data I am receiving on the other side (the central), which is my mobile device (an iPhone X) running the BlueLight app. I am able to find the service, and I am receiving some form of hex data, but I don't know how to convert it into WAV files, or whether that is even possible.

      If you have any ideas or resources that could help, please do share.
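      [Editor's note] The bytes arriving at the central are raw little-endian 16-bit PCM at 16 kHz (per the mic config below), so converting them to a playable file is just a matter of wrapping them in a WAV container. A minimal host-side sketch in Python's standard `wave` module, assuming you have copied the hex dump out of the BLE app (`pcm16_to_wav` and the sample hex string are illustrative, not from the original post):

      ```python
      import wave

      def pcm16_to_wav(pcm_bytes, path, sample_rate=16000, channels=1):
          """Wrap raw little-endian 16-bit PCM samples in a WAV container."""
          with wave.open(path, "wb") as w:
              w.setnchannels(channels)
              w.setsampwidth(2)          # 16-bit samples = 2 bytes each
              w.setframerate(sample_rate)
              w.writeframes(pcm_bytes)

      # Example: hex dump copied from the BLE app -> bytes -> WAV
      hex_dump = "0A00F5FF1400E1FF"      # 4 fake PCM samples for illustration
      pcm16_to_wav(bytes.fromhex(hex_dump), "capture.wav")
      ```

      If the result plays as noise, check the byte order and that no notification payloads were dropped or reordered on the way in.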

      I am attaching my code below:

      #include <ArduinoBLE.h>
      #include <mic.h>  // Seeed PDM microphone library
      
      // Microphone settings
      #define DEBUG 1
      #define SAMPLES 800
      
      mic_config_t mic_config = {
        .channel_cnt = 1,
        .sampling_rate = 16000,
        .buf_size = 1600,
        .debug_pin = LED_BUILTIN
      };
      
      NRF52840_ADC_Class Mic(&mic_config);
      int16_t recording_buf[SAMPLES];
      volatile static bool record_ready = false;
      
      // Custom 128-bit UUIDs for the audio service and characteristic
      #define SERVICE_UUID "19B10000-E8F2-537E-4F6C-D104768A1214"
      #define CHARACTERISTIC_UUID_AUDIO "19B10001-E8F2-537E-4F6C-D104768A1214"
      
      // ArduinoBLE caps a characteristic value at 512 bytes, and a single
      // notification at the negotiated ATT MTU, so the 1600-byte buffer
      // (800 samples x 2 bytes) cannot be sent with one writeValue() call.
      // Send it in chunks instead; the central must reassemble them.
      #define CHUNK_BYTES 200
      
      BLEService audioService(SERVICE_UUID);
      // Variable-length value, sized to one chunk rather than the whole buffer
      BLECharacteristic audioDataCharacteristic(CHARACTERISTIC_UUID_AUDIO,
                                                BLERead | BLENotify,
                                                CHUNK_BYTES, false);
      
      void setup() {
        Serial.begin(115200);
        while (!Serial) delay(10);  // waits for the serial monitor (blocks without USB)
      
        Serial.println("Initializing microphone...");
        Mic.set_callback(audio_rec_callback);
        if (!Mic.begin()) {
          Serial.println("Mic initialization failed");
          while (1);
        }
        Serial.println("Mic initialized.");
      
        Serial.println("Initializing BLE...");
        if (!BLE.begin()) {
          Serial.println("Failed to start BLE!");
          while (1);
        }
      
        BLE.setLocalName("SCT Audio");
        BLE.setAdvertisedService(audioService);
      
        audioService.addCharacteristic(audioDataCharacteristic);
        BLE.addService(audioService);
      
        uint8_t zero = 0;
        audioDataCharacteristic.writeValue(&zero, 1);  // initial placeholder value
      
        BLE.advertise();
        Serial.println("BLE Peripheral is now advertising");
      }
      
      void loop() {
        BLEDevice central = BLE.central();
      
        if (central) {
          Serial.println("Connected to central device");
      
          while (central.connected()) {
            if (record_ready) {
              // (Optional) dump samples to the Serial Plotter; printing 800
              // lines is slow and can starve the BLE stack, so it is
              // disabled by default:
              // for (int i = 0; i < SAMPLES; i++) Serial.println(recording_buf[i]);
      
              // Transmit the audio data in MTU-sized chunks
              const uint8_t *p = (const uint8_t *)recording_buf;
              for (int sent = 0; sent < 2 * SAMPLES; sent += CHUNK_BYTES) {
                int n = min(CHUNK_BYTES, 2 * SAMPLES - sent);
                audioDataCharacteristic.writeValue(p + sent, n);
              }
              Serial.println("Audio data transmitted over BLE");
              record_ready = false;
            }
          }
      
          Serial.println("Disconnected from central device");
        }
      }
      
      // Called by the PDM driver with each block of captured samples
      static void audio_rec_callback(uint16_t *buf, uint32_t buf_len) {
        static uint32_t idx = 0;
      
        if (record_ready) return;  // don't overwrite a frame still being sent
      
        for (uint32_t i = 0; i < buf_len; i++) {
          recording_buf[idx++] = (int16_t)buf[i];
          if (idx >= SAMPLES) {
            idx = 0;
            record_ready = true;
            break;
          }
        }
      }
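      [Editor's note] One likely reason the hex data looks opaque on the phone: BLE notifications are capped by the negotiated ATT MTU (as little as 20 payload bytes on a default 23-byte MTU), so a 1600-byte sample buffer arrives, at best, split across many notifications that the central must stitch back together in order. A sketch of that reassembly in Python, assuming the peripheral streams each frame front-to-back with no framing header (`PcmReassembler` is a hypothetical helper, not part of any BLE library):

      ```python
      SAMPLES = 800              # one audio frame, matching the sketch above
      FRAME_BYTES = SAMPLES * 2  # 16-bit samples

      class PcmReassembler:
          """Collect BLE notification payloads until full PCM frames emerge."""
          def __init__(self, frame_bytes=FRAME_BYTES):
              self.frame_bytes = frame_bytes
              self.buf = bytearray()   # bytes still waiting for a complete frame
              self.frames = []         # completed frames, ready for WAV packaging

          def on_notify(self, payload: bytes):
              """Feed each notification payload in arrival order."""
              self.buf.extend(payload)
              while len(self.buf) >= self.frame_bytes:
                  self.frames.append(bytes(self.buf[:self.frame_bytes]))
                  del self.buf[:self.frame_bytes]
      ```

      A real protocol would usually prepend a small header (sequence number, frame length) to each chunk so dropped or reordered notifications can be detected rather than silently corrupting the audio.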
      
      
      • salmanfaris @amalkrishnam3:

        Hi @amalkrishnam3, Were you able to make any progress?

        • amalkrishnam3 @salmanfaris:

          @salmanfaris No, I haven't made any progress on the matter.


          By MakerGram | A XiStart Initiative | Built with ♥ NodeBB
          Copyright © 2023 MakerGram, All rights reserved.
          Privacy Policy | Terms & Conditions | Disclaimer | Code of Conduct