
Aros/Developer/AHIDrivers

From Wikibooks, open books for an open world

Retargetable audio devices


To support sound cards other than Paula (Amiga(TM)), a system called AHI was developed. AHI uses ahi.device plus additional drivers to support different sound cards, which are configured in the AHI preferences (in the Prefs drawer). It is programmed in a similar way to the old Amiga audio.device. More information is included in the AHI developer archive, which you can download from the AHI homepage or Aminet.

Wikipedia page

Amiga Sourceforge DevHelp

Units

Units 0-3 can be shared by as many programs as you assign to them. The Music unit blocks the hardware it is set to exclusively, so no other program can play sound through that hardware at the same time. This is why <ahi-device>.audio was invented: it is a virtual piece of hardware that routes its sound data to whichever unit you configure it for. That way, even though the Music unit exclusively blocks <ahi-device>.audio, other programs can still send sound to units 0-3. Usually all programs use unit 0; only very few programs use the Music unit.

AHI works in two modes: as a device driver (high level) or as a library (low level). Although this confused me at first, after programming the device mode it became clear: the device mode simply lets you send a stream of sound, while the library mode handles samples that have already been preloaded, for use with trackers.

Library method


The library (low-level) method uses AHI functions such as AHI_SetSound(), AHI_SetVol() and so on. In practice this method has one big problem: you get no mixing functionality. Your program locks ahi.device, and while it runs no other AHI program can work. As stated in the documentation, the only advantages of low-level coding are "low overhead and more advanced control over the playing sounds".

Here you get exclusive access to the audio hardware and can do almost anything you want, including monitoring.

The drawback is obvious: the audio hardware becomes unavailable to all other programs. Most drivers do not handle this situation gracefully; if another program tries to access the hardware, it usually breaks everything, and you have to restart the program or reallocate the audio hardware.

Starting with AHI 6 there is a non-blocking AHI mode called "device mode", but it does not allow recording. It is playback-only and has poor timing: good enough to play something, but too bad for real-time response.

To use two samples you need the library mode; you open ahi.device and extract the library base from the AHI device structure.

Device method


In this approach you simply use ahi.device like a standard Amiga device and send raw data to AHI with CMD_WRITE. With high-level AHI coding you get mixed sounds, no more "locking" of ahi.device, and so on. For MP3 playback or a module player, for example, you simply decompress the data the CPU needs and use this decompressed raw data with CMD_WRITE.

The device interface is much easier to program and is fine for system noises or an MP3 player. It has a fixed latency of 20 ms, which is sufficient for most (non-musical) cases. It does not block the hardware, so it is the preferred way of quickly playing audio.

It also supports the CMD_READ message, but

  • it blocks AHI exclusively for as long as you are reading.
  • it sometimes produces strange clicks when recording through the device interface.

All you do is use the CMD_WRITE command, setting the sample information in the AHI IORequest structure. For further samples you need to copy the IORequest and use the copies for the other samples, and so on. Essentially the structure is sent as a message to the AHI daemon, as is standard for Exec devices, and that is why it needs a copy: otherwise AHI would try to link a message into a list it is already on, at the same address, and crash!

I would start with the device API, especially since it is very simple. Once you have loaded/generated the sample data, opened the AHI device and allocated the IORequest(s), you can play samples with the Exec library functions (DoIO, SendIO, BeginIO...). However, the number of AHI channels may be limited, so IIRC lower-priority audio is then queued and played later. You can create your own mixer routine, which basically "streams" data using double-buffered IO requests (there is an example of double buffering in the AHI SDK).

Do I really need to record through ahi.device? If it is just a monitoring feature, you can use AHI's internal monitoring (it has the lowest possible latency and may use hardware capabilities for monitoring), or you can use the lib interface to read, manipulate and copy your data to the outbuffer. Latency is usually 20 ms, depending on the driver; an application cannot control it.

You can also use datatypes.library to play samples. It is hard to say whether its timing is very precise, but it is at least very easy to use.
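As a rough sketch of the datatypes route (hedged: the tag names and trigger method below are from the general AmigaOS datatypes API, not from this text, and should be checked against the datatypes autodocs):

```c
/* Hypothetical sketch: playing a sound file via datatypes.library.
 * Assumes AmigaOS-style includes and that sound.datatype is installed. */
#include <proto/dos.h>
#include <proto/datatypes.h>
#include <datatypes/datatypesclass.h>
#include <datatypes/soundclass.h>

void play_with_datatypes(CONST_STRPTR filename)
{
    Object *o = NewDTObject((APTR)filename,
                            DTA_GroupID, GID_SOUND,
                            SDTA_Cycles, 1,        /* play once */
                            TAG_DONE);
    if (o)
    {
        /* STM_PLAY starts asynchronous playback */
        DoDTMethod(o, NULL, NULL, DTM_TRIGGER, NULL, STM_PLAY, NULL);
        Delay(250);          /* crude: give it some time to finish */
        DisposeDTObject(o);  /* stops playback and frees the object */
    }
}
```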

CreateMsgPort
CreateIORequest
OpenDevice (ahi.device)

loop
    {
    depack some music data
    fill AHIdatas
    SendIO((struct IORequest *) AHIdatas);
    }

Then, when I need it, I just do this for the sound (it will play on a second channel, which is the only solution that makes it work concurrently through AHI):

CreateMsgPort
CreateIORequest
OpenDevice (ahi.device)

fill AHIdatas

DoIO/sendio

Find the default audio ID, e.g. unit 0 or the default unit. Then call AHI_AllocAudioA(), passing it the ID or AHI_DEFAULT_ID together with an AHIA_Channels tag set to the minimum number of channels you need. Then check whether the allocation succeeded. If it did, you know there are enough channels and you can call AHI_FreeAudio(). If not, it means there are not enough channels, provided you passed all the required tags.
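The probe described above can be sketched like this (a hedged sketch: it assumes the AHI library interface is already open, and NEEDED_CHANNELS is an illustrative constant, not from the text):

```c
#include <devices/ahi.h>
#include <proto/ahi.h>
#include <utility/tagitem.h>

#define NEEDED_CHANNELS 4   /* illustrative minimum */

BOOL mode_has_enough_channels(void)
{
    struct AHIAudioCtrl *ctrl;
    struct TagItem tags[] =
    {
        { AHIA_AudioID,  AHI_DEFAULT_ID  },
        { AHIA_Channels, NEEDED_CHANNELS },
        { TAG_DONE,      0               }
    };

    ctrl = AHI_AllocAudioA(tags);
    if (ctrl)
    {
        /* Allocation succeeded: the mode supports enough channels. */
        AHI_FreeAudio(ctrl);
        return TRUE;
    }
    /* Not enough channels (assuming all required tags were passed). */
    return FALSE;
}
```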

The AHI device interface plays streams, not samples. AHI mixes together as many streams as the number of channels configured in the preferences allows. If you try to play more streams than there are channels available, the extra streams are muted.

If you need to synchronize two samples (perhaps for stereo), you can issue a CMD_STOP command, perform your CMD_WRITE commands, and then issue a CMD_START command to start playback. Be aware that this affects all AHI applications, not just your own.
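The stop/write/start sequence could look like this (a hedged sketch: it assumes three already-initialized AHIRequests, one reserved for control commands and two prepared CMD_WRITEs; the names are illustrative):

```c
#include <exec/io.h>
#include <devices/ahi.h>
#include <proto/exec.h>

/* ctl is a request reserved for control commands; l and r carry the
 * two prepared CMD_WRITEs. */
void start_in_sync(struct AHIRequest *ctl,
                   struct AHIRequest *l, struct AHIRequest *r)
{
    ctl->ahir_Std.io_Command = CMD_STOP;
    DoIO((struct IORequest *)ctl);   /* pauses EVERY AHI application! */

    SendIO((struct IORequest *)l);   /* queue both writes while stopped */
    SendIO((struct IORequest *)r);

    ctl->ahir_Std.io_Command = CMD_START;
    DoIO((struct IORequest *)ctl);   /* both streams start together */
}
```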

That brings me to another question: is your sound mono or stereo? As you have read, the correct way to do stereo is to tell AHI to center the pan and supply one stereo sample. I do not know whether it returns an error if it cannot do this; it may accept the write but mute one channel, as you discovered.

One more thing about issuing multiple CMD_WRITEs from different AHI requests: AHI handles each instance separately and mixes the sounds together on the same track. As long as the hardware supports it, the high-level API can only provide panning, not direct track assignment, AFAIK.

http://utilitybase.com/forum/index.php?action=vthread&forum=201&topic=1565&page=-1

If you want to play several samples through one channel via the device API, you have to build a stream from the samples yourself.
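Building such a stream mostly amounts to adding the samples together and clamping the result. A small platform-independent sketch for signed 16-bit data (the function name and buffer layout are illustrative, not from AHI):

```c
#include <stdint.h>
#include <stddef.h>

/* Mix two signed 16-bit sample buffers into one output stream,
 * clamping the sum to the 16-bit range to avoid wrap-around. */
void mix16(const int16_t *a, const int16_t *b, int16_t *out, size_t frames)
{
    for (size_t i = 0; i < frames; i++)
    {
        int32_t s = (int32_t)a[i] + (int32_t)b[i];
        if (s >  32767) s =  32767;
        if (s < -32768) s = -32768;
        out[i] = (int16_t)s;
    }
}
```

The resulting buffer is then what a single CMD_WRITE would carry.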

The AHI device API uses OpenDevice and then performs CMD_READ, CMD_WRITE, CMD_START and CMD_STOP.

See this thread

if (AHImp = CreateMsgPort()) {
  if (AHIio = (struct AHIRequest *)CreateIORequest(AHImp, sizeof(struct AHIRequest))) {
     AHIio->ahir_Version = 6;
     AHIDevice = OpenDevice(AHINAME, 0, (struct IORequest *)AHIio, NULL);
  }
}

This creates a new message port, creates an IORequest structure, and finally opens the AHI device for writing.

Playing a sound

// Play buffer
AHIio->ahir_Std.io_Message.mn_Node.ln_Pri = pri;
AHIio->ahir_Std.io_Command = CMD_WRITE;
AHIio->ahir_Std.io_Data = p1;
AHIio->ahir_Std.io_Length = length;
AHIio->ahir_Std.io_Offset = 0;
AHIio->ahir_Frequency = FREQUENCY;
AHIio->ahir_Type = TYPE;
AHIio->ahir_Volume = 0x10000; // Full volume
AHIio->ahir_Position = 0x8000; // Centered
AHIio->ahir_Link = link;
SendIO((struct IORequest *) AHIio);
// fill

  AHIios[0]->ahir_Std.io_Message.mn_Node.ln_Pri = 127;
  AHIios[0]->ahir_Std.io_Command  = CMD_WRITE;
  AHIios[0]->ahir_Std.io_Data     = raw_data;
  AHIios[0]->ahir_Std.io_Length   = size_of_buffer;
  AHIios[0]->ahir_Std.io_Offset   = 0;
  AHIios[0]->ahir_Frequency       = 48000;     // freq
  AHIios[0]->ahir_Type            = AHIST_S16S;// 16b
  AHIios[0]->ahir_Volume          = 0x10000;   // vol.
  AHIios[0]->ahir_Position        = 0x8000;   
  AHIios[0]->ahir_Link            = NULL;

// send

SendIO((struct IORequest *) AHIios[0]);

The AHIRequest structure is similar to the audio.device structure. p1 points to the actual raw sound data, length is the size of the data buffer, Frequency is the replay frequency, e.g. 8000 Hz, Type is the sample data type, e.g. AHIST_M8S, followed by the volume and the position between the speakers. SendIO() starts playing the sound, and you can use WaitIO() to wait until the buffer has finished playing before starting the next chunk of data.

Freeing the audio

  • I call AHI_ControlAudio() with play set to FALSE to make sure no sound is playing.
  • I unload the sounds with AHI_UnloadSound() to make sure they are unloaded.
  • Then I call AHI_FreeAudio().

Closing


When you are done with the AHI device, you need to close it. For example:

  • call CloseDevice()
  • then DeleteIORequest()
  • finally DeleteMsgPort()
if (!AHIDevice)
   CloseDevice((struct IORequest *)AHIio);
DeleteIORequest((struct IORequest *)AHIio);
DeleteIORequest((struct IORequest *)AHIio2);
DeleteMsgPort(AHImp);

Updating sounds regularly


Have a look at the simpleplay example.

If you want to "update" your sounds at regular intervals, some functionality is already provided.

In AllocAudio() you can supply a player function using the AHIA_PlayerFunc tag.

AHIA_PlayerFunc — if you are playing a musical score, you should use this "interrupt" source rather than VBLANK or a CIA timer, in order to get the best results with all audio drivers. If you cannot use this method, you must not use any "non-realtime" modes (see AHI_GetAudioAttrsA() in the autodocs, the AHIDB_Realtime tag).

AHIA_PlayerFreq — if non-zero, this enables timing and specifies how many times per second PlayerFunc is called. If AHIA_PlayerFunc is non-zero, this value must be specified. It is suggested that you keep the frequency between 100 and 200 Hz. Since the frequency is a fixed-point number, AHIA_PlayerFreq should not be higher than 13107200 (which is 200 Hz).
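Since AHIA_PlayerFreq is a 16.16 fixed-point value, an integer rate in Hz has to be multiplied by 65536 (shifted left by 16) first. A tiny illustrative helper (the name is hypothetical):

```c
#include <stdint.h>

/* Convert an integer player frequency in Hz to AHI's 16.16 Fixed format. */
int32_t hz_to_fixed(int32_t hz)
{
    return hz << 16;   /* same as hz * 65536 */
}
```

For example, hz_to_fixed(200) gives 13107200, the limit quoted above, and a typical 125 Hz player rate becomes 8192000.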

This makes it possible to write a replayer that, for example, decides which sounds need to be stopped, or performs slides, volume up/down and so on.

Let the main loop wait until the player is "done".

You can do this with message passing, but signals work as well. To stop the player you can use a boolean (set by a button press or whatever you like) that the player checks; the player then signals the main loop, telling it to exit.

Have a look at the PlaySineEverywhere.c example from the AHI developer archive.

Miscellaneous


Many different things are called "latency". The one I care most about is the time difference between audio (e.g. from a microphone) arriving at an input and leaving through the monitor output. You can measure it by feeding something with a short attack (the crack of two sticks works well) into one channel and connecting that channel's output to the input of another channel. Record a few seconds on both channels, stop recording, zoom into the waveforms of both channels and measure the time difference between them. That is your input/output latency.

Latency when playing samples is trickier, because it depends on the program hosting the VST instrument. If you have a MIDI keyboard with sounds, you can pick similar sounds on the keyboard and in the VST library, connect the analog output of the sample-playing channel to one input and the synthesizer output to another input, play your sound, record it onto two tracks and look at the time difference between the two tracks. This is not perfectly accurate, but it gives you a rough measurement.

If you do the following (in your code):

filebuffer = Open("e.raw",MODE_OLDFILE);
if (filebuffer==NULL) printf("\nfilebuffer NULL");
else length1 = Read(filebuffer,p1,BUFFERSIZE);

filebuffer = Open("a.raw",MODE_OLDFILE);
if (filebuffer==NULL) printf("\nfilebuffer NULL");
else length2 = Read(filebuffer,p2,BUFFERSIZE);

filebuffer = Open("d.raw",MODE_OLDFILE);
if (filebuffer==NULL) printf("\nfilebuffer NULL");
else length3 = Read(filebuffer,p3,BUFFERSIZE);

filebuffer = Open("g.raw",MODE_OLDFILE);
if (filebuffer==NULL) printf("\nfilebuffer NULL");
else length4 = Read(filebuffer,p4,BUFFERSIZE);

filebuffer = Open("b.raw",MODE_OLDFILE);
if (filebuffer==NULL) printf("\nfilebuffer NULL");
else length5 = Read(filebuffer,p5,BUFFERSIZE);

filebuffer = Open("ec.raw",MODE_OLDFILE);
if (filebuffer==NULL) printf("\nfilebuffer NULL");
else length6 = Read(filebuffer,p6,BUFFERSIZE);

Then your variable "filebuffer" (which is a special pointer to the handle of the file) gets overwritten before the handle is closed.

red:
So I kind of expected something like:
filebuffer = Open("b.raw",MODE_OLDFILE);
if (filebuffer==NULL)
{
  printf("\nfilebuffer NULL");
}
else
{
  length5 = Read(filebuffer,p5,BUFFERSIZE);
  if (Close(filebuffer))
  {
    printf("\nfile b.raw closed successfully");
  }
  else
  {
    printf("\nfile b.raw did not close properly, but we cannot use the filehandle anymore because it is no longer valid");
  }
}

You have to unload/free every channel/sound that was allocated, whether used or not.

For example, the following code loops through up to the last allocated channel.

for (chan_no = 0; chan_no < num_of_channels; chan_no++)
{
    if (channel[chan_no]) free(channel[chan_no]);
}

Perhaps if (channel[chan_no] != NULL)

To be safe, you can set each sound bank to NULL before exiting.

Examples


Another example

It needs double buffering, though.

struct MsgPort    *AHIPort = NULL;
struct AHIRequest *AHIReq = NULL;
BYTE               AHIDevice = -1;
UBYTE              unit = AHI_DEFAULT_UNIT;

static int write_ahi_output (char * output_data, int output_size);
static void close_ahi_output ( void );

static int
open_ahi_output ( void ) {
    if (AHIPort = CreateMsgPort())
    {
        if (AHIReq = (struct AHIRequest *) CreateIORequest(AHIPort, sizeof(struct AHIRequest)))
        {
            AHIReq->ahir_Version = 4;
            if (!(AHIDevice = OpenDevice(AHINAME, unit, (struct IORequest *) AHIReq, NULL)))
            {
                send_output = write_ahi_output;
                close_output = close_ahi_output;
                return 0;
            }
            DeleteIORequest((struct IORequest *) AHIReq);
            AHIReq = NULL;

        }
        DeleteMsgPort(AHIPort);
        AHIPort = NULL;
    }

    return -1;
}

static int
write_ahi_output (char * output_data, int output_size) {
    if (!CheckIO((struct IORequest *) AHIReq))
    {
        WaitIO((struct IORequest *) AHIReq);
        //AbortIO((struct IORequest *) AHIReq);
    }

    AHIReq->ahir_Std.io_Command = CMD_WRITE;
    AHIReq->ahir_Std.io_Flags = 0;
    AHIReq->ahir_Std.io_Data = output_data;
    AHIReq->ahir_Std.io_Length = output_size;
    AHIReq->ahir_Std.io_Offset = 0;
    AHIReq->ahir_Frequency = rate;
    AHIReq->ahir_Type = AHIST_S16S;
    AHIReq->ahir_Volume = 0x10000;
    AHIReq->ahir_Position = 0x8000;
    AHIReq->ahir_Link = NULL;
    SendIO((struct IORequest *) AHIReq);
     
    return 0;
}

static void
close_ahi_output ( void ) {
if (!CheckIO((struct IORequest *) AHIReq)) {
    WaitIO((struct IORequest *) AHIReq);
    AbortIO((struct IORequest *) AHIReq);
}

if (AHIReq) {
    CloseDevice((struct IORequest *) AHIReq);
    AHIDevice = -1;
    DeleteIORequest((struct IORequest *) AHIReq);
    AHIReq = NULL;
}

if (AHIPort) {
    DeleteMsgPort(AHIPort);
    AHIPort = NULL;
}

}

High-level AHI for sound playback. The idea is to create several I/O requests; when you want to play a sound, pick a free request, start a CMD_WRITE on it with BeginIO(), and mark that I/O request as in use (the ch->busy field in the code above). The SoundIO() function checks the replies from ahi.device to see whether any I/O request has completed, and then marks it as no longer in use. If there is no free I/O request, the PlaySnd function simply interrupts the longest-playing request with AbortIO()/WaitIO() and reuses it.


char *snd_buffer[5];
int sound_file_size[5];

int number;

struct Process *sound_player;
int sound_player_done = 0;

void load_sound(char *name, int number)
{

   FILE *fp_filename;

   if((fp_filename = fopen(name,"rb")) == NULL)
     { printf("can't open sound file\n");exit(0);} ;

   fseek (fp_filename,0,SEEK_END);
   sound_file_size[number] = ftell(fp_filename);
   fseek (fp_filename,0,SEEK_SET);

   snd_buffer[number]=(char *)malloc(sound_file_size[number]);

   fread(snd_buffer[number],sound_file_size[number],1,fp_filename);

   //printf("%d\n",sound_file_size[number]);

   fclose(fp_filename);

 //  free(snd_buffer[number]);

}

void play_sound_routine(void)

{

struct MsgPort    *AHImp_sound     = NULL;
struct AHIRequest *AHIios_sound[2] = {NULL,NULL};
struct AHIRequest *AHIio_sound     = NULL;
BYTE               AHIDevice_sound = -1;
//ULONG sig_sound;

//-----open/setup ahi

    if((AHImp_sound=CreateMsgPort()) != NULL) {
    if((AHIio_sound=(struct AHIRequest *)CreateIORequest(AHImp_sound,sizeof(struct AHIRequest))) != NULL) {
      AHIio_sound->ahir_Version = 4;
      AHIDevice_sound=OpenDevice(AHINAME,0,(struct IORequest *)AHIio_sound,0);
    }
  }

  if(AHIDevice_sound) {
    Printf("Unable to open %s/0 version 4\n",AHINAME);
    goto sound_panic;
  }

  AHIios_sound[0]=AHIio_sound;
  SetIoErr(0);

    AHIios_sound[0]->ahir_Std.io_Message.mn_Node.ln_Pri = 127;
    AHIios_sound[0]->ahir_Std.io_Command  = CMD_WRITE;
    AHIios_sound[0]->ahir_Std.io_Data     = snd_buffer[number];//sndbuf;
    AHIios_sound[0]->ahir_Std.io_Length   = sound_file_size[number];//fib_snd.fib_Size;
    AHIios_sound[0]->ahir_Std.io_Offset   = 0;
    AHIios_sound[0]->ahir_Frequency       = 8000;//44100;
    AHIios_sound[0]->ahir_Type            = AHIST_M8S;//AHIST_M16S;
    AHIios_sound[0]->ahir_Volume          = 0x10000;          // Full volume
    AHIios_sound[0]->ahir_Position        = 0x8000;           // Centered
    AHIios_sound[0]->ahir_Link            = NULL;

    DoIO((struct IORequest *) AHIios_sound[0]);

sound_panic:

  //printf("are we on sound_exit?\n");
  if(!AHIDevice_sound)
    CloseDevice((struct IORequest *)AHIio_sound);
  DeleteIORequest((struct IORequest *)AHIio_sound);
  DeleteMsgPort(AHImp_sound);
  sound_player_done = 1;

}

void stop_sound(void)
{

     Signal(&sound_player->pr_Task, SIGBREAKF_CTRL_C );
     while(sound_player_done !=1){};
     sound_player_done=0;
}

void play_sound(int num)

{
      number=num;

         #ifdef __MORPHOS__

         sound_player = CreateNewProcTags(
							NP_Entry, &play_sound_routine,
							NP_Priority, 1,
							NP_Name, "Ahi raw-sound-player Process",
                          //  NP_Input, Input(),
                          //  NP_CloseInput, FALSE,
                          //  NP_Output, Output(),
                          //  NP_CloseOutput, FALSE,

                            NP_CodeType, CODETYPE_PPC,

							TAG_DONE);

         #else

         sound_player = CreateNewProcTags(
							NP_Entry, &play_sound_routine,
							NP_Priority, 1,
							NP_Name, "Ahi raw-sound-player Process",
                          //  NP_Input, Input(),
                          //  NP_CloseInput, FALSE,
                          //  NP_Output, Output(),
                          //  NP_CloseOutput, FALSE,

							TAG_DONE);
         #endif

         Delay(10); // small delay to let the sounds finish

}

Low-level method for music playback

These steps will allow you to use low-level AHI functions:
- Create message port and AHIRequest with appropriate functions from exec.library.
- Open the device with OpenDevice() giving AHI_NO_UNIT as a unit.
- Get an interface to the library with GetInterface(), giving the io_Device field of the IORequest as the first parameter.

struct AHIIFace *IAHI;
struct Library *AHIBase;
struct AHIRequest *ahi_request;
struct MsgPort *mp;

if (mp = IExec->CreateMsgPort())
{   if (ahi_request = (struct AHIRequest *)IExec->CreateIORequest(mp, sizeof(struct AHIRequest)))
   {
      ahi_request->ahir_Version = 4;
      if (IExec->OpenDevice("ahi.device", AHI_NO_UNIT, (struct IORequest *)ahi_request, 0) == 0)
      {
         AHIBase = (struct Library *)ahi_request->ahir_Std.io_Device;
         if (IAHI = (struct AHIIFace *)IExec->GetInterface(AHIBase, "main", 1, NULL))
         {
            // Interface got, we can now use AHI functions
            // ...
            // Once we are done we have to drop interface and free resources
            IExec->DropInterface((struct Interface *)IAHI);
         }
         IExec->CloseDevice((struct IORequest *)ahi_request);
      }
      IExec->DeleteIORequest((struct IORequest *)ahi_request);
   }
   IExec->DeleteMsgPort(mp);
}
Once you have the AHI interface, its functions can be used. To start playing sounds you need to allocate the audio (optionally you can ask the user for an audio mode and frequency). Then you need to load the samples to use with AHI. You do this with AHI_AllocAudio(), AHI_ControlAudio() and AHI_LoadSound().

struct AHIAudioCtrl *ahi_ctrl;

if (ahi_ctrl = IAHI->AHI_AllocAudio(
   AHIA_AudioID, AHI_DEFAULT_ID,
   AHIA_MixFreq, AHI_DEFAULT_FREQ,
   AHIA_Channels, NUMBER_OF_CHANNELS, // the desired number of channels
   AHIA_Sounds, NUMBER_OF_SOUNDS, // maximum number of sounds used
TAG_DONE))
{
   IAHI->AHI_ControlAudio(ahi_ctrl, AHIC_Play, TRUE, TAG_DONE);
   int i;
   for (i = 0; i < NUMBER_OF_SOUNDS; i++)
   {
      // These variables need to be initialized
      uint32 type;
      APTR samplearray;
      uint32 length;
      struct AHISampleInfo sample;

      sample.ahisi_Type = type; 
      // where type is the type of sample, for example AHIST_M8S for 8-bit mono sound
      sample.ahisi_Address = samplearray; 
      // where samplearray must point to sample data
      sample.ahisi_Length = length / IAHI->AHI_SampleFrameSize(type);
      if (IAHI->AHI_LoadSound(i + 1, AHIST_SAMPLE, &sample, ahi_ctrl) != 0)
      {
         // error while loading sound, cleanup
      }
   }
   // everything OK, play the sounds
   // ...
   // then unload sounds and free the audio
   for (i = 0; i < NUMBER_OF_SOUNDS; i++)
      IAHI->AHI_UnloadSound(i + 1, ahi_ctrl);
   IAHI->AHI_ControlAudio(ahi_ctrl, AHIC_Play, FALSE, TAG_DONE);
   IAHI->AHI_FreeAudio(ahi_ctrl);
}

Set the volume with AHI_SetVol(), the frequency with AHI_SetFreq(), and play a sound with AHI_SetSound().

#include <devices/ahi.h>
#include <dos/dostags.h>
#include <proto/dos.h>
#include <proto/exec.h>
#include <proto/ptplay.h>

struct UserArgs
{
	STRPTR file;
	LONG   *freq;
};

CONST TEXT Version[] = "$VER: ShellPlayer 1.0 (4.4.06)";

STATIC struct Library *PtPlayBase;
STATIC struct Task *maintask;
STATIC APTR modptr;
STATIC LONG frequency;
STATIC VOLATILE int player_done = 0;

STATIC VOID AbortAHI(struct MsgPort *port, struct IORequest *r1, struct IORequest *r2)
{
	if (!CheckIO(r1))
	{
		AbortIO(r1);
		WaitIO(r1);
	}

	if (!CheckIO(r2))
	{
		AbortIO(r2);
		WaitIO(r2);
	}

	GetMsg(port);
	GetMsg(port);
}

STATIC VOID StartAHI(struct AHIRequest *r1, struct AHIRequest *r2, WORD *buf1, WORD *buf2)
{
	PtRender(modptr, (BYTE *)(buf1), (BYTE *)(buf1+1), 4, frequency, 1, 16, 2);
	PtRender(modptr, (BYTE *)(buf2), (BYTE *)(buf2+1), 4, frequency, 1, 16, 2);

	r1->ahir_Std.io_Command = CMD_WRITE;
	r1->ahir_Std.io_Offset  = 0;
	r1->ahir_Std.io_Data    = buf1;
	r1->ahir_Std.io_Length  = frequency*2*2;
	r2->ahir_Std.io_Command = CMD_WRITE;
	r2->ahir_Std.io_Offset  = 0;
	r2->ahir_Std.io_Data    = buf2;
	r2->ahir_Std.io_Length  = frequency*2*2;

	r1->ahir_Link = NULL;
	r2->ahir_Link = r1;

	SendIO((struct IORequest *)r1);
	SendIO((struct IORequest *)r2);
}

STATIC VOID PlayerRoutine(void)
{
	struct AHIRequest req1, req2;
	struct MsgPort *port;
	WORD *buf1, *buf2;

	buf1 = AllocVec(frequency*2*2, MEMF_ANY);
	buf2 = AllocVec(frequency*2*2, MEMF_ANY);

	if (buf1 && buf2)
	{
		port = CreateMsgPort();

		if (port)
		{
			req1.ahir_Std.io_Message.mn_Node.ln_Pri = 0;
			req1.ahir_Std.io_Message.mn_ReplyPort = port;
			req1.ahir_Std.io_Message.mn_Length = sizeof(req1);
			req1.ahir_Version = 2;

			if (OpenDevice("ahi.device", 0, (struct IORequest *)&req1, 0) == 0)
			{
				req1.ahir_Type           = AHIST_S16S;
				req1.ahir_Frequency      = frequency;
				req1.ahir_Volume         = 0x10000;
				req1.ahir_Position       = 0x8000;

				CopyMem(&req1, &req2, sizeof(struct AHIRequest));

				StartAHI(&req1, &req2, buf1, buf2);

				for (;;)
				{
					struct AHIRequest *io;
					ULONG sigs;

					sigs = Wait(SIGBREAKF_CTRL_C | 1 << port->mp_SigBit);

					if (sigs & SIGBREAKF_CTRL_C)
						break;

					if ((io = (struct AHIRequest *)GetMsg(port)))
					{
						if (GetMsg(port))
						{
							// Both IO request finished, restart

							StartAHI(&req1, &req2, buf1, buf2);
						}
						else
						{
							APTR link;
							WORD *buf;

							if (io == &req1)
							{
								link = &req2;
								buf = buf1;
							}
							else
							{
								link = &req1;
								buf = buf2;
							}

							PtRender(modptr, (BYTE *)buf, (BYTE *)(buf+1), 4, frequency, 1, 16, 2);

							io->ahir_Std.io_Command = CMD_WRITE;
							io->ahir_Std.io_Offset  = 0;
							io->ahir_Std.io_Length  = frequency*2*2;
							io->ahir_Std.io_Data    = buf;
							io->ahir_Link = link;

							SendIO((struct IORequest *)io);
						}
					}
				}

				AbortAHI(port, (struct IORequest *)&req1, (struct IORequest *)&req2);
				CloseDevice((struct IORequest *)&req1);
			}

			DeleteMsgPort(port);
		}
	}

	FreeVec(buf1);
	FreeVec(buf2);

	Forbid();
	player_done = 1;
	Signal(maintask, SIGBREAKF_CTRL_C);
}

int main(void)
{
	struct RDArgs *args;
	struct UserArgs params;

	int rc = RETURN_FAIL;

	maintask = FindTask(NULL);

	args = ReadArgs("FILE/A,FREQ/K/N", (IPTR *)&params, NULL);

	if (args)
	{
		PtPlayBase = OpenLibrary("ptplay.library", 0);

		if (PtPlayBase)
		{
			BPTR fh;

			if (params.freq)
			{
				frequency = *params.freq;
			}

			if (frequency < 4000 || frequency > 96000)
				frequency = 48000;

			fh = Open(params.file, MODE_OLDFILE);

			if (fh)
			{
				struct FileInfoBlock fib;
				APTR buf;

				ExamineFH(fh, &fib);

				buf = AllocVec(fib.fib_Size, MEMF_ANY);

				if (buf)
				{
					Read(fh, buf, fib.fib_Size);
				}

				Close(fh);

				if (buf)
				{
					ULONG type;

					type = PtTest(params.file, buf, 1200);

					modptr = PtInit(buf, fib.fib_Size, frequency, type);

					if (modptr)
					{
						struct Process *player;

						player = CreateNewProcTags(
							NP_Entry, &PlayerRoutine,
							NP_Priority, 1,
							NP_Name, "Player Process",
							#ifdef __MORPHOS__
							NP_CodeType, CODETYPE_PPC,
							#endif
							TAG_DONE);

						if (player)
						{
							rc = RETURN_OK;
							Printf("Now playing \033[1m%s\033[22m at %ld Hz... Press CTRL-C to abort.\n", params.file, frequency);

							do
							{
								Wait(SIGBREAKF_CTRL_C);

								Forbid();
								if (!player_done)
								{
									Signal(&player->pr_Task, SIGBREAKF_CTRL_C);
								}
								Permit();
							}
							while (!player_done);
						}

						PtCleanup(modptr);
					}
					else
					{
						PutStr("Unknown file!\n");
					}
				}
				else
				{
					PutStr("Not enough memory!\n");
				}
			}
			else
			{
				PutStr("Could not open file!\n");
			}

			CloseLibrary(PtPlayBase);
		}

		FreeArgs(args);
	}

	if (rc == RETURN_FAIL)
		PrintFault(IoErr(), NULL);
	return rc;
}

Other examples


Master volume utility

Anyone could make such a relatively simple utility. Just call AHI_SetEffect() with a master volume structure. You could build a window with a slider and call the function easily.
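The call such a utility would make can be sketched as follows (hedged: it assumes an already allocated AHIAudioCtrl and the struct AHIEffMasterVolume definition from devices/ahi.h):

```c
#include <devices/ahi.h>
#include <proto/ahi.h>

/* Set the master volume on an allocated audio control structure.
 * volume is a Fixed value: 0x10000 corresponds to 1.0 (100%). */
void set_master_volume(struct AHIAudioCtrl *audioctrl, Fixed volume)
{
    struct AHIEffMasterVolume vol;

    vol.ahiemv_Effect = AHIET_MASTERVOLUME;
    vol.ahiemv_Volume = volume;
    AHI_SetEffect(&vol, audioctrl);

    /* Before freeing the audio, the effect should be cancelled with
       AHIET_MASTERVOLUME | AHIET_CANCEL, as with other AHI effects. */
}
```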

You write to the AHI device, and AHI writes to a sound card, to the native hardware, or even to a file. These options are configured by the user. AHI also performs software mixing so that several sounds can be played simultaneously.

AHI provides four "units" for audio. This makes it possible for one program to play on the native hardware while another plays on a sound card, by attaching the appropriate AHI driver to a unit number. For software developers, AHI offers two ways of playing audio. One is the AUDIO: DOS device. AHI can create a volume called AUDIO: that works like an AmigaDOS volume. You can read and write data directly to that volume and it will be played through the speakers. This is the easiest way of writing PCM, but not the best.

First, if the user has removed the AUDIO: entry from the mountlist, your program will not work, and you will get plenty of silly support questions like "I have AHI installed, why doesn't it work?". The better option is to send IORequests to AHI. This lets you control volume and balance settings while your program runs (with AUDIO: you set these when you open the file and cannot change them without closing and reopening AUDIO:), and you can use a neat trick called double buffering to improve efficiency. Double buffering lets you fill one audio buffer while another is playing. This asynchronous operation prevents "stuttering" audio on slower systems.

We initialize AHI, then prepare and send AHI requests to ahi.device.

It is very important to calculate the number of bytes you want AHI to read from the buffer. If you get this wrong, a nasty crash can result! To do so, multiply the PCM count by the number of channels by the number of AHI buffers.

A quick note on volume and position: AHI uses a rather arcane data type called Fixed. A Fixed number consists of 32 bits: one sign bit, a 15-bit integer part and a 16-bit fractional part. When building the AHI request, I multiply the number by 0x00010000 to convert it to a Fixed value. If I used this code in a DOS background process, I could change volume and balance on the fly so that the next queued sample would play louder or softer. It is also possible to interrupt AHI so that the change takes effect immediately, but I will not go into that.
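To illustrate the Fixed arithmetic described above (a hypothetical helper, not part of the original code):

```c
#include <stdint.h>

/* Convert a volume in percent (0-100) to AHI's 16.16 Fixed format:
 * multiply by 0x00010000 (65536) and scale back by 100. */
int32_t percent_to_fixed(int32_t percent)
{
    return (percent * 0x00010000) / 100;
}
```

So percent_to_fixed(100) yields 0x10000 (full volume, as used in the requests above), and percent_to_fixed(50) yields 0x8000, the same value used for a centered pan position.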

After sending the requests, we put the necessary bits in place to check for CTRL-C and any AHI reply messages. Then it is time to swap buffers.

/*
 * Copyright (C) 2005 Mark Olsen
 *
 * This program is free software; you can redistribute it and/or
 * modify it under the terms of the GNU General Public License
 * as published by the Free Software Foundation; either version 2
 * of the License, or (at your option) any later version.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program; if not, write to the Free Software
 * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA.
 */

#include <exec/exec.h>
#include <devices/ahi.h>
#include <proto/exec.h>
#define USE_INLINE_STDARG
#include <proto/ahi.h>
#include <utility/hooks.h>

#include "../game/q_shared.h"
#include "../client/snd_local.h"

struct AHIdata *ad;

struct AHIChannelInfo
{
	struct AHIEffChannelInfo aeci;
	ULONG offset;
};

struct AHIdata
{
	struct MsgPort *msgport;
	struct AHIRequest *ahireq;
	int ahiopen;
	struct AHIAudioCtrl *audioctrl;
	void *samplebuffer;
	struct Hook EffectHook;
	struct AHIChannelInfo aci;
	unsigned int readpos;
};

#if !defined(__AROS__)
ULONG EffectFunc()
{
	struct Hook *hook = (struct Hook *)REG_A0;
	struct AHIEffChannelInfo *aeci = (struct AHIEffChannelInfo *)REG_A1;

	struct AHIdata *ad;

	ad = hook->h_Data;

	ad->readpos = aeci->ahieci_Offset[0];

	return 0;
}

static struct EmulLibEntry EffectFunc_Gate =
{
	TRAP_LIB, 0, (void (*)(void))EffectFunc
};
#else
AROS_UFH3(ULONG, EffectFunc,
          AROS_UFHA(struct Hook *, hook, A0),
          AROS_UFHA(struct AHIAudioCtrl *, aac, A2),
          AROS_UFHA(struct AHIEffChannelInfo *, aeci, A1)
         )
{
    AROS_USERFUNC_INIT
    
	struct AHIdata *ad;

	ad = hook->h_Data;

	ad->readpos = aeci->ahieci_Offset[0];

	return 0;

    AROS_USERFUNC_EXIT
}
#endif

qboolean SNDDMA_Init(void)
{
	ULONG channels;
	ULONG speed;
	ULONG bits;

	ULONG r;

	struct Library *AHIBase;

	struct AHISampleInfo sample;

	cvar_t *sndbits;
	cvar_t *sndspeed;
	cvar_t *sndchannels;

	char modename[64];

	if (ad)
		return 1;

	sndbits = Cvar_Get("sndbits", "16", CVAR_ARCHIVE);
	sndspeed = Cvar_Get("sndspeed", "0", CVAR_ARCHIVE);
	sndchannels = Cvar_Get("sndchannels", "2", CVAR_ARCHIVE);

	speed = sndspeed->integer;

	if (speed == 0)
		speed = 22050;

	ad = AllocVec(sizeof(*ad), MEMF_ANY);
	if (ad)
	{
		ad->msgport = CreateMsgPort();
		if (ad->msgport)
		{
			ad->ahireq = (struct AHIRequest *)CreateIORequest(ad->msgport, sizeof(struct AHIRequest));
			if (ad->ahireq)
			{
				ad->ahiopen = !OpenDevice("ahi.device", AHI_NO_UNIT, (struct IORequest *)ad->ahireq, 0);
				if (ad->ahiopen)
				{
					AHIBase = (struct Library *)ad->ahireq->ahir_Std.io_Device;

					ad->audioctrl = AHI_AllocAudio(AHIA_AudioID, AHI_DEFAULT_ID,
					                               AHIA_MixFreq, speed,
					                               AHIA_Channels, 1,
					                               AHIA_Sounds, 1,
					                               TAG_END);

					if (ad->audioctrl)
					{
						AHI_GetAudioAttrs(AHI_INVALID_ID, ad->audioctrl,
						                  AHIDB_BufferLen, sizeof(modename),
						                  AHIDB_Name, (ULONG)modename,
						                  AHIDB_MaxChannels, (ULONG)&channels,
						                  AHIDB_Bits, (ULONG)&bits,
						                  TAG_END);

						AHI_ControlAudio(ad->audioctrl,
						                 AHIC_MixFreq_Query, (ULONG)&speed,
						                 TAG_END);

						if (bits == 8 || bits == 16)
						{
							if (channels > 2)
								channels = 2;

							dma.speed = speed;
							dma.samplebits = bits;
							dma.channels = channels;
#if !defined(__AROS__)
							dma.samples = 2048*(speed/11025);
#else
							dma.samples = 16384*(speed/11025);
#endif
							dma.submission_chunk = 1;

#if !defined(__AROS__)
							ad->samplebuffer = AllocVec(2048*(speed/11025)*(bits/8)*channels, MEMF_ANY);
#else
							ad->samplebuffer = AllocVec(16384*(speed/11025)*(bits/8)*channels, MEMF_ANY);
#endif
							if (ad->samplebuffer)
							{
								dma.buffer = ad->samplebuffer;

								if (channels == 1)
								{
									if (bits == 8)
										sample.ahisi_Type = AHIST_M8S;
									else
										sample.ahisi_Type = AHIST_M16S;
								}
								else
								{
									if (bits == 8)
										sample.ahisi_Type = AHIST_S8S;
									else
										sample.ahisi_Type = AHIST_S16S;
								}

								sample.ahisi_Address = ad->samplebuffer;
#if !defined(__AROS__)
								sample.ahisi_Length = (2048*(speed/11025)*(bits/8))/AHI_SampleFrameSize(sample.ahisi_Type);
#else
								sample.ahisi_Length = (16384*(speed/11025)*(bits/8))/AHI_SampleFrameSize(sample.ahisi_Type);								
#endif

								r = AHI_LoadSound(0, AHIST_DYNAMICSAMPLE, &sample, ad->audioctrl);
								if (r == 0)
								{
									r = AHI_ControlAudio(ad->audioctrl,
									                     AHIC_Play, TRUE,
									                     TAG_END);

									if (r == 0)
									{
										AHI_Play(ad->audioctrl,
										         AHIP_BeginChannel, 0,
										         AHIP_Freq, speed,
										         AHIP_Vol, 0x10000,
										         AHIP_Pan, 0x8000,
										         AHIP_Sound, 0,
										         AHIP_EndChannel, NULL,
										         TAG_END);

										ad->aci.aeci.ahie_Effect = AHIET_CHANNELINFO;
										ad->aci.aeci.ahieci_Func = &ad->EffectHook;
										ad->aci.aeci.ahieci_Channels = 1;

#if !defined(__AROS__)
										ad->EffectHook.h_Entry = (void *)&EffectFunc_Gate;
#else
										ad->EffectHook.h_Entry = (IPTR (*)())&EffectFunc;
#endif
										ad->EffectHook.h_Data = ad;
										AHI_SetEffect(&ad->aci, ad->audioctrl);

										Com_Printf("Using AHI mode \"%s\" for audio output\n", modename);
										Com_Printf("Channels: %d bits: %d frequency: %d\n", channels, bits, speed);

										return 1;
									}
								}
							}
							FreeVec(ad->samplebuffer);
						}
						AHI_FreeAudio(ad->audioctrl);
					}
					else
						Com_Printf("Failed to allocate AHI audio\n");

					CloseDevice((struct IORequest *)ad->ahireq);
				}
				DeleteIORequest((struct IORequest *)ad->ahireq);
			}
			DeleteMsgPort(ad->msgport);
		}
		FreeVec(ad);
	}

	return 0;
}

int SNDDMA_GetDMAPos(void)
{
	return ad->readpos*dma.channels;
}

void SNDDMA_Shutdown(void)
{
	struct Library *AHIBase;

	if (ad == 0)
		return;

	AHIBase = (struct Library *)ad->ahireq->ahir_Std.io_Device;

	ad->aci.aeci.ahie_Effect = AHIET_CHANNELINFO|AHIET_CANCEL;
	AHI_SetEffect(&ad->aci.aeci, ad->audioctrl);
	AHI_ControlAudio(ad->audioctrl,
	                 AHIC_Play, FALSE,
	                 TAG_END);

	AHI_FreeAudio(ad->audioctrl);
	FreeVec(ad->samplebuffer);
	CloseDevice((struct IORequest *)ad->ahireq);
	DeleteIORequest((struct IORequest *)ad->ahireq);
	DeleteMsgPort(ad->msgport);
	FreeVec(ad);

	ad = 0;
}

void SNDDMA_Submit(void)
{
}

void SNDDMA_BeginPainting (void)
{
}
/*
Copyright (C) 2006-2007 Mark Olsen

This program is free software; you can redistribute it and/or
modify it under the terms of the GNU General Public License
as published by the Free Software Foundation; either Version 2
of the License, or (at your option) any later version.

This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

See the GNU General Public License for more details.

You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software
Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA  02111-1307, USA.
*/

#include <exec/exec.h>
#include <devices/ahi.h>
#include <proto/exec.h>
#define USE_INLINE_STDARG
#include <proto/ahi.h>

#include "quakedef.h"
#include "sound.h"

struct AHIChannelInfo
{
	struct AHIEffChannelInfo aeci;
	ULONG offset;
};

struct ahi_private
{
	struct MsgPort *msgport;
	struct AHIRequest *ahireq;
	struct AHIAudioCtrl *audioctrl;
	void *samplebuffer;
	struct Hook EffectHook;
	struct AHIChannelInfo aci;
	unsigned int readpos;
};

ULONG EffectFunc()
{
	struct Hook *hook = (struct Hook *)REG_A0;
	struct AHIEffChannelInfo *aeci = (struct AHIEffChannelInfo *)REG_A1;

	struct ahi_private *p;

	p = hook->h_Data;

	p->readpos = aeci->ahieci_Offset[0];

	return 0;
}

static struct EmulLibEntry EffectFunc_Gate =
{
	TRAP_LIB, 0, (void (*)(void))EffectFunc
};

void ahi_shutdown(struct SoundCard *sc)
{
	struct ahi_private *p = sc->driverprivate;

	struct Library *AHIBase;

	AHIBase = (struct Library *)p->ahireq->ahir_Std.io_Device;

	p->aci.aeci.ahie_Effect = AHIET_CHANNELINFO|AHIET_CANCEL;
	AHI_SetEffect(&p->aci.aeci, p->audioctrl);
	AHI_ControlAudio(p->audioctrl,
	                 AHIC_Play, FALSE,
	                 TAG_END);

	AHI_FreeAudio(p->audioctrl);

	CloseDevice((struct IORequest *)p->ahireq);
	DeleteIORequest((struct IORequest *)p->ahireq);

	DeleteMsgPort(p->msgport);

	FreeVec(p->samplebuffer);

	FreeVec(p);
}

int ahi_getdmapos(struct SoundCard *sc)
{
	struct ahi_private *p = sc->driverprivate;

	sc->samplepos = p->readpos*sc->channels;

	return sc->samplepos;
}

void ahi_submit(struct SoundCard *sc, unsigned int count)
{
	/* Nothing to do: AHI plays directly out of the shared sample buffer. */
}

qboolean ahi_init(struct SoundCard *sc, int rate, int channels, int bits)
{
	struct ahi_private *p;
	ULONG r;

	char name[64];

	struct Library *AHIBase;

	struct AHISampleInfo sample;

	p = AllocVec(sizeof(*p), MEMF_ANY);
	if (p)
	{
		p->msgport = CreateMsgPort();
		if (p->msgport)
		{
			p->ahireq = (struct AHIRequest *)CreateIORequest(p->msgport, sizeof(struct AHIRequest));
			if (p->ahireq)
			{
				r = !OpenDevice("ahi.device", AHI_NO_UNIT, (struct IORequest *)p->ahireq, 0);
				if (r)
				{
					AHIBase = (struct Library *)p->ahireq->ahir_Std.io_Device;

					p->audioctrl = AHI_AllocAudio(AHIA_AudioID, AHI_DEFAULT_ID,
					                               AHIA_MixFreq, rate,
					                               AHIA_Channels, 1,
					                               AHIA_Sounds, 1,
					                               TAG_END);

					if (p->audioctrl)
					{
						AHI_GetAudioAttrs(AHI_INVALID_ID, p->audioctrl,
						                  AHIDB_BufferLen, sizeof(name),
						                  AHIDB_Name, (ULONG)name,
						                  AHIDB_MaxChannels, (ULONG)&channels,
						                  AHIDB_Bits, (ULONG)&bits,
						                  TAG_END);

						AHI_ControlAudio(p->audioctrl,
						                 AHIC_MixFreq_Query, (ULONG)&rate,
						                 TAG_END);

						if (bits == 8 || bits == 16)
						{
							if (channels > 2)
								channels = 2;

							sc->speed = rate;
							sc->samplebits = bits;
							sc->channels = channels;
							sc->samples = 16384*(rate/11025);

							p->samplebuffer = AllocVec(16384*(rate/11025)*(bits/8)*channels, MEMF_CLEAR);
							if (p->samplebuffer)
							{
								sc->buffer = p->samplebuffer;

								if (channels == 1)
								{
									if (bits == 8)
										sample.ahisi_Type = AHIST_M8S;
									else
										sample.ahisi_Type = AHIST_M16S;
								}
								else
								{
									if (bits == 8)
										sample.ahisi_Type = AHIST_S8S;
									else
										sample.ahisi_Type = AHIST_S16S;
								}

								sample.ahisi_Address = p->samplebuffer;
								sample.ahisi_Length = (16384*(rate/11025)*(bits/8))/AHI_SampleFrameSize(sample.ahisi_Type);

								r = AHI_LoadSound(0, AHIST_DYNAMICSAMPLE, &sample, p->audioctrl);
								if (r == 0)
								{
									r = AHI_ControlAudio(p->audioctrl,
									                     AHIC_Play, TRUE,
									                     TAG_END);

									if (r == 0)
									{
										AHI_Play(p->audioctrl,
										         AHIP_BeginChannel, 0,
										         AHIP_Freq, rate,
										         AHIP_Vol, 0x10000,
										         AHIP_Pan, 0x8000,
										         AHIP_Sound, 0,
										         AHIP_EndChannel, NULL,
										         TAG_END);

										p->aci.aeci.ahie_Effect = AHIET_CHANNELINFO;
										p->aci.aeci.ahieci_Func = &p->EffectHook;
										p->aci.aeci.ahieci_Channels = 1;

										p->EffectHook.h_Entry = (void *)&EffectFunc_Gate;
										p->EffectHook.h_Data = p;

										AHI_SetEffect(&p->aci, p->audioctrl);

										Com_Printf("Using AHI mode \"%s\" for audio output\n", name);
										Com_Printf("Channels: %d bits: %d frequency: %d\n", channels, bits, rate);

										sc->driverprivate = p;

										sc->GetDMAPos = ahi_getdmapos;
										sc->Submit = ahi_submit;
										sc->Shutdown = ahi_shutdown;

										return 1;
									}
								}
							}
							FreeVec(p->samplebuffer);
						}
						AHI_FreeAudio(p->audioctrl);
					}
					else
						Com_Printf("Failed to allocate AHI audio\n");

					CloseDevice((struct IORequest *)p->ahireq);
				}
				DeleteIORequest((struct IORequest *)p->ahireq);
			}
			DeleteMsgPort(p->msgport);
		}
		FreeVec(p);
	}

	return 0;
}

SoundInitFunc AHI_Init = ahi_init;

Hooks


An old concept, best avoided if possible. The hook function is used to play and control samples. It is called at the frequency given at initialisation (100 times per second in this example).

So somewhere in your "normal" code you flip a switch that tells the hook function to start playing a sample (or to do whatever else you want with it).

Then, inside the hook function, you start the sample playing and apply effects using the AHI control functions (among others).

One example of this is a module player (the .mod file format), where the data for each channel is processed and effects are applied on every call.

In your case it would work more like a mod player: you want to start playing a note and stop it when required, for example when a counter reaches a certain value.

The (likely) reason your numbers are not printed is that the routine is called _many_ times per second.

The point is that you have to find a mechanism (whichever suits your purpose best) that "feeds" the player (the hook function) from mouse clicks (or key presses), and a mechanism that lets the player do other things with a playing sample (stop it, apply effects, and so on).

You can use the hook's data field to pass a structure to your "playback" routine, so that you can, for example, tell the player that a certain sample has started playing. The player can then decide (say, when a counter reaches a certain value) to actually stop the sample, and set or change a state flag in that structure so the main program knows the sample can be triggered again.

#include "backends/platform/amigaos3/amigaos3.h"
#include "backends/mixer/amigaos3/amigaos3-mixer.h"
#include "common/debug.h"
#include "common/system.h"
#include "common/config-manager.h"
#include "common/textconsole.h"

// Amiga includes
#include <clib/exec_protos.h>
#include "ahi-player-hook.h"

#define DEFAULT_MIX_FREQUENCY 11025

AmigaOS3MixerManager* g_mixerManager;

static void audioPlayerCallback() {
     g_mixerManager->callbackHandler();
}

AmigaOS3MixerManager::AmigaOS3MixerManager()
	:
	_mixer(0),
	_audioSuspended(false) {

    g_mixerManager = this;
}

AmigaOS3MixerManager::~AmigaOS3MixerManager() {
	if (_mixer) {
        _mixer->setReady(false);

        if (audioCtrl) {
            debug(1, "deleting AHI_ControlAudio");
    
            // Stop sounds.
            AHI_ControlAudio(audioCtrl, AHIC_Play, FALSE, TAG_DONE);
    
    
            AHI_UnloadSound(0, audioCtrl);
            AHI_FreeAudio(audioCtrl);
            audioCtrl = NULL;
        }
    
        if (audioRequest) {
            debug(1, "deleting AHIDevice");
            CloseDevice((struct IORequest*)audioRequest);
            DeleteIORequest((struct IORequest*)audioRequest);
            audioRequest = NULL;
    
            DeleteMsgPort(audioPort);
            audioPort = NULL;
            AHIBase = NULL;
        }
    
        if (sample.ahisi_Address) {
            debug(1, "deleting soundBuffer");
            FreeVec(sample.ahisi_Address);
            sample.ahisi_Address = NULL;
        }
    
    	delete _mixer;
    }
}

void AmigaOS3MixerManager::init() {
    
    audioPort = (struct MsgPort*)CreateMsgPort();
    if (!audioPort) {
        error("Could not create a Message Port for AHI");
    }

    audioRequest = (struct AHIRequest*)CreateIORequest(audioPort, sizeof(struct AHIRequest));
    if (!audioRequest) {
        error("Could not create an IO Request for AHI");
    }

    // Open at least version 4.
    audioRequest->ahir_Version = 4;

    BYTE deviceError = OpenDevice(AHINAME, AHI_NO_UNIT, (struct IORequest*)audioRequest, NULL);
    if (deviceError) {
        error("Unable to open AHI Device: %s version 4", AHINAME);
    }

    // Needed by Audio Control?
    AHIBase = (struct Library *)audioRequest->ahir_Std.io_Device;

    uint32 desiredMixingfrequency = 0;

	// Determine the desired output sampling frequency.
	if (ConfMan.hasKey("output_rate")) {
		desiredMixingfrequency = ConfMan.getInt("output_rate");
    }
    
    if (desiredMixingfrequency == 0) {
		desiredMixingfrequency = DEFAULT_MIX_FREQUENCY;
    }
    
    ULONG audioId = AHI_DEFAULT_ID;
    
    audioCtrl = AHI_AllocAudio(
      AHIA_AudioID, audioId,
      AHIA_MixFreq, desiredMixingfrequency,
      AHIA_Channels, numAudioChannels,
      AHIA_Sounds, 1,
      AHIA_PlayerFunc, createAudioPlayerCallback(audioPlayerCallback),
      AHIA_PlayerFreq, audioCallbackFrequency<<16,
      AHIA_MinPlayerFreq, audioCallbackFrequency<<16,
      AHIA_MaxPlayerFreq, audioCallbackFrequency<<16,
      TAG_DONE);

    if (!audioCtrl) {
        error("Could not initialize AHI");
    }
    
    // Get obtained mixing frequency.
    ULONG obtainedMixingfrequency = 0;
    AHI_ControlAudio(audioCtrl, AHIC_MixFreq_Query, (Tag)&obtainedMixingfrequency, TAG_DONE);
    debug(5, "Mixing frequency desired = %d Hz", desiredMixingfrequency);
    debug(5, "Mixing frequency obtained = %d Hz", obtainedMixingfrequency);

    // Calculate the sample factor.
    ULONG sampleCount = obtainedMixingfrequency / audioCallbackFrequency;
    debug(5, "Calculated sample rate @ %u times per second  = %u", audioCallbackFrequency, sampleCount);  
    
    // 32 bits (4 bytes) are required per sample for storage (16bit stereo).
    sampleBufferSize = (sampleCount * AHI_SampleFrameSize(AHIST_S16S));

    sample.ahisi_Type = AHIST_S16S;
    sample.ahisi_Address = AllocVec(sampleBufferSize, MEMF_PUBLIC|MEMF_CLEAR);
    sample.ahisi_Length = sampleCount;

    AHI_SetFreq(0, obtainedMixingfrequency, audioCtrl, AHISF_IMM);
    AHI_SetVol(0, 0x10000L, 0x8000L, audioCtrl, AHISF_IMM);
 
    AHI_LoadSound(0, AHIST_DYNAMICSAMPLE, &sample, audioCtrl);
    AHI_SetSound(0, 0, 0, 0, audioCtrl, AHISF_IMM);    
        
    // Create the mixer instance and start the sound processing.
    assert(!_mixer);
	_mixer = new Audio::MixerImpl(g_system, obtainedMixingfrequency);
	assert(_mixer);
    _mixer->setReady(true);
        
          
    // Start feeding samples to sound hardware (and start the AHI callback!)
    AHI_ControlAudio(audioCtrl, AHIC_Play, TRUE, TAG_DONE);
}

void AmigaOS3MixerManager::callbackHandler() {
	assert(_mixer);
	
	_mixer->mixCallback((byte*)sample.ahisi_Address, sampleBufferSize);
}

void AmigaOS3MixerManager::suspendAudio() {
	AHI_ControlAudio(audioCtrl, AHIC_Play, FALSE, TAG_DONE);
	
	_audioSuspended = true;
}

int AmigaOS3MixerManager::resumeAudio() {
	if (!_audioSuspended) {
		return -2;
    }
	
    AHI_ControlAudio(audioCtrl, AHIC_Play, TRUE, TAG_DONE);
    
	_audioSuspended = false;
	
	return 0;
}

AmiArcadia also uses AHI, and C source code is available on Aminet, as is ScummVM AGA... all the source is there. To build an AHI callback hook you will also need the SDI header files.

References


Still needs editing, probably needs reworking...

Do you perhaps need to supply AHIA_MinPlayerFreq and AHIA_MaxPlayerFreq?

AHIA_PlayerFreq (Fixed) - If non-zero, this enables timing and specifies how many times per second PlayerFunc will be called. This must be specified if AHIA_PlayerFunc is. Do not use any extreme frequencies. The result of MixFreq/PlayerFreq must fit in a UWORD, i.e. it must be less than or equal to 65535. It is suggested that you keep the result above 80. In normal use this should not be a problem. Note that the data type is Fixed, not an integer: 50 Hz is 50<<16.

The default is a reasonable value. Don't depend on it.

AHIA_MinPlayerFreq (Fixed) - The minimum frequency (AHIA_PlayerFreq) you will use. You must supply this if you are using the device's interrupt features!

AHIA_MaxPlayerFreq (Fixed) - The maximum frequency (AHIA_PlayerFreq) you will use. You must supply this if you are using the device's interrupt features!

I don't see anything in the documentation that limits the high end; just remember that AHI must be able to finish your callback function in time for the next callback. How big is your callback function?

AHI_GetAudioAttrs() should be terminated with TAG_DONE.

What is AHIR_DoMixFreq? Remove the AHIR_DoMixFreq tag from the AHI_AllocAudio() call; I don't think it should be there.

Decode the audio into sample buffers and feed the buffers to AHI with the normal double-buffering method from a subprocess. Can AHI buffer sound in the music interrupt to be played later? How would I do what you suggest? I have never used the library API, sorry; I have always used CMD_WRITE to play sound. It doesn't work: sooner or later the IO requests get out of sync because of task switching.

Suggestions regarding the number of channels to set. The device interface supports the following commands:

CMD_FLUSH
CMD_READ
CMD_RESET
CMD_START
CMD_STOP
CMD_WRITE

CloseDevice
NSCMD_DEVICEQUERY
OpenDevice
ahi.device
if (AHI_GetAudioAttrs(AHI_DEFAULT_ID, NULL,
                      AHIDB_BufferLen, 100,
                      AHIDB_Inputs, &num_inputs,
                      TAG_DONE))
{
    printf("getaudioattrs worked\n");
    printf("num inputs is %i\navailable inputs:\n", num_inputs);

    char input_name[100];

    for (int a = 0; a != num_inputs; a++)
    {
        AHI_GetAudioAttrs(AHI_DEFAULT_ID, NULL,
                          AHIDB_BufferLen, 100,
                          AHIDB_InputArg, a,
                          AHIDB_Input, input_name,
                          TAG_DONE);
        printf("%i: %s\n", a, input_name);
    }

    /* To select one of the listed inputs afterwards, e.g. input 1: */
    /* AHI_ControlAudio(Record_AudioCtrl, AHIC_Input, 1, TAG_DONE); */
}
//changed second argument from..

AHIDevice=OpenDevice(AHINAME,0,(struct IORequest *)AHIio,NULL);

//to this..

AHIDevice=OpenDevice(AHINAME,AHI_NO_UNIT,(struct IORequest *)AHIio,NULL); 

AHI_AllocAudioA

audioctrl = AHI_AllocAudioA( tags );
struct AHIAudioCtrl *AHI_AllocAudioA( struct TagItem * );
audioctrl = AHI_AllocAudio( tag1, ... );
struct AHIAudioCtrl *AHI_AllocAudio( Tag, ... );

AHI_AllocAudioRequestA

requester = AHI_AllocAudioRequestA( tags );
struct AHIAudioModeRequester *AHI_AllocAudioRequestA(struct TagItem * );
requester = AHI_AllocAudioRequest( tag1, ... );
struct AHIAudioModeRequester *AHI_AllocAudioRequest( Tag, ... );

AHI_AudioRequestA

success = AHI_AudioRequestA( requester, tags );
BOOL AHI_AudioRequestA( struct AHIAudioModeRequester *, struct TagItem * );
result = AHI_AudioRequest( requester, tag1, ... );
BOOL AHI_AudioRequest( struct AHIAudioModeRequester *, Tag, ... );

AHI_BestAudioIDA

ID = AHI_BestAudioIDA( tags );
ULONG AHI_BestAudioIDA( struct TagItem * );
ID = AHI_BestAudioID( tag1, ... );
ULONG AHI_BestAudioID( Tag, ... );

AHI_ControlAudioA

error = AHI_ControlAudioA( audioctrl, tags );
ULONG AHI_ControlAudioA( struct AHIAudioCtrl *, struct TagItem * );
error = AHI_ControlAudio( AudioCtrl, tag1, ...);
ULONG AHI_ControlAudio( struct AHIAudioCtrl *, Tag, ... );

AHI_FreeAudio

AHI_FreeAudio( audioctrl );
void AHI_FreeAudio( struct AHIAudioCtrl * );

AHI_FreeAudioRequest

AHI_FreeAudioRequest( requester );
void AHI_FreeAudioRequest( struct AHIAudioModeRequester * );

AHI_GetAudioAttrsA

success = AHI_GetAudioAttrsA( ID, [audioctrl], tags );
BOOL AHI_GetAudioAttrsA( ULONG, struct AHIAudioCtrl *, struct TagItem * );
success = AHI_GetAudioAttrs( ID, [audioctrl], attr1, &result1, ...);
BOOL AHI_GetAudioAttrs( ULONG, struct AHIAudioCtrl *, Tag, ... );

AHI_LoadSound

error = AHI_LoadSound( sound, type, info, audioctrl );
ULONG AHI_LoadSound( UWORD, ULONG, IPTR, struct AHIAudioCtrl * );

AHI_NextAudioID

next_ID = AHI_NextAudioID( last_ID );
ULONG AHI_NextAudioID( ULONG );

AHI_PlayA

AHI_PlayA( audioctrl, tags );
void AHI_PlayA( struct AHIAudioCtrl *, struct TagItem * );
AHI_Play( AudioCtrl, tag1, ...);
void AHI_Play( struct AHIAudioCtrl *, Tag, ... );

AHI_SampleFrameSize

size = AHI_SampleFrameSize( sampletype );
ULONG AHI_SampleFrameSize( ULONG );

AHI_SetEffect

error = AHI_SetEffect( effect, audioctrl );
ULONG AHI_SetEffect( IPTR, struct AHIAudioCtrl * );

AHI_SetFreq

AHI_SetFreq( channel, freq, audioctrl, flags );
void AHI_SetFreq( UWORD, ULONG, struct AHIAudioCtrl *, ULONG );

AHI_SetSound

AHI_SetSound( channel, sound, offset, length, audioctrl, flags );
void AHI_SetSound( UWORD, UWORD, ULONG, LONG, struct AHIAudioCtrl *, ULONG );

AHI_SetVol

AHI_SetVol( channel, volume, pan, audioctrl, flags );
void AHI_SetVol( UWORD, Fixed, sposition, struct AHIAudioCtrl *, ULONG );

AHI_UnloadSound

AHI_UnloadSound( sound, audioctrl );
void AHI_UnloadSound( UWORD, struct AHIAudioCtrl * );