RISC-V PLIC enable interrupt for multicore

Hello all,

There is a bug in QEMU related to enabling external interrupts for multiple cores (virt machine).

After correcting QEMU as described in bug #1815078 (https://bugs.launchpad.net/qemu/+bug/1815078), when we try to enable interrupts for core 1 at address 0x0C00_2080, we don't seem to be able to trigger an external interrupt (e.g. from UART0).
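
For concreteness, here is a minimal bare-metal sketch of the kind of enable write being attempted (PLIC base 0x0C000000 on the virt machine; the names are ours, not from QEMU, and UART0 is source 10 on virt as far as we can tell):

    #include <stdint.h>

    #define PLIC_BASE          0x0C000000UL
    #define PLIC_ENABLE_CORE1  (PLIC_BASE + 0x2080UL)  /* "core 1" per our reading */

    /* Set the enable bit for interrupt source `irq` (e.g. 10 for the
       virt machine's UART0) in the enable block at 0x0C00_2080. */
    static void plic_enable_irq_core1(uint32_t irq)
    {
        volatile uint32_t *reg =
            (volatile uint32_t *)(PLIC_ENABLE_CORE1 + 4u * (irq / 32u));
        *reg |= 1u << (irq % 32u);
    }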

This works perfectly for core 0, but for core 1 it does not work at all. Since bug #1815078 prevented any external interrupt from being enabled, I assume this feature has never been tested. I have tried to look at the QEMU source code, but with no luck so far.

My guess is that the problem lies in the function parse_hart_config (in sifive_plic.c), which incorrectly initializes plic->addr_config[addrid].hartid, a field that is later read in sifive_plic_update. But this is only a guess.

Best regards,
Pharos team

Hi,

After some debugging (and luck), we found that the problem (at least on the virt board) is that the PLIC code inside QEMU addresses core × 2 instead of just the core (core = hart). That is why it worked for core 0 (0 × 2 = 0), but for core 1 we have to address the PLIC memory area for core 2.

For example, the interrupt enable address for core 1 starts at offset 0x002080 (see https://github.com/riscv/riscv-plic-spec/blob/master/riscv-plic.adoc), but we actually have to change the enable bit for core 2 (at 0x002100) to make it work for core 1.

The same is true for the priority threshold and claim/complete registers (we need to multiply the core by 2).

Either the documentation at https://github.com/riscv/riscv-plic-spec/blob/master/riscv-plic.adoc does not give the correct memory addresses for the QEMU virt board, or QEMU is wrong.
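
A sketch of the offset arithmetic described above; the base offsets and strides come from the linked PLIC spec, while the factor of 2 is the empirical workaround reported here, not something the spec prescribes:

    /* Spec offsets: enable bits for context c start at 0x002000 + c*0x80;
       the threshold/claim block for context c is at 0x200000 + c*0x1000. */
    #define PLIC_ENABLE_BASE     0x002000UL
    #define PLIC_ENABLE_STRIDE   0x80UL
    #define PLIC_CONTEXT_BASE    0x200000UL
    #define PLIC_CONTEXT_STRIDE  0x1000UL

    /* Empirical workaround from above: index with 2 * core, not core. */
    static unsigned long plic_enable_off(unsigned core)
    {
        return PLIC_ENABLE_BASE + (2u * core) * PLIC_ENABLE_STRIDE;   /* core 1 -> 0x002100 */
    }

    static unsigned long plic_threshold_off(unsigned core)
    {
        return PLIC_CONTEXT_BASE + (2u * core) * PLIC_CONTEXT_STRIDE;
    }

    static unsigned long plic_claim_off(unsigned core)
    {
        return plic_threshold_off(core) + 4u;  /* claim/complete register */
    }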

On Tue, Mar 24, 2020 at 4:20 PM RTOS Pharos <email address hidden> wrote:
>
> Hi,
>
> After some debugging (and luck), we found that the problem (at least
> on the virt board) is that the PLIC code inside QEMU addresses
> core × 2 instead of just the core (core = hart). That is why it worked
> for core 0 (0 × 2 = 0), but for core 1 we have to address the PLIC
> memory area for core 2.
>
> For example, the interrupt enable address for core 1 starts at offset
> 0x002080 (see
> https://github.com/riscv/riscv-plic-spec/blob/master/riscv-plic.adoc),
> but we actually have to change the enable bit for core 2 (at 0x002100)
> to make it work for core 1.


https://github.com/riscv/riscv-plic-spec/blob/master/riscv-plic.adoc says:

"base + 0x002080: Enable bits for sources 0-31 on context 1"

This is context 1, not core 1.

It looks to me like you were running an image built for the SiFive FU540.
Please test your image against the "sifive_u" machine instead.
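
To spell out the mapping this implies (our reading: on the virt machine every hart exposes an M-mode context followed by an S-mode context, so a hart's contexts are 2 * hartid and 2 * hartid + 1):

    /* Context numbering on virt, assuming one M-mode and one S-mode
       context per hart (hart 0: M = ctx 0, S = ctx 1; hart 1: M = ctx 2,
       S = ctx 3; and so on). */
    static inline unsigned plic_context(unsigned hartid, int smode)
    {
        return 2u * hartid + (smode ? 1u : 0u);
    }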

>
> The same is true for the priority threshold and claim/complete
> registers (we need to multiply the core by 2).
>
> Either the documentation at
> https://github.com/riscv/riscv-plic-spec/blob/master/riscv-plic.adoc
> does not give the correct memory addresses for the QEMU virt board,
> or QEMU is wrong.
>
> --

Regards,
Bin


Thank you for the explanation. I actually built it for the "virt" machine; I'll try "sifive_u" when I can.

But I suspect your explanation is correct, so this bug can be closed on my end.

Hello, as far as I can tell there is a major problem with the PLIC implementation. When decompiling the DTB of a virt board with X harts, I see that hart 0 has MEI and SEI, hart 1 has MEI and SEI, and so on. But when configuring context 1 (hart 0's SEI), no interrupt is generated, whereas contexts 0, 2, 4, etc. work. So for me the problem is within the PLIC or the RISC-V implementation. If anyone wants to correct it, I can help.

Best regards,
Serge Teodori
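
For reference, plugging context 1 into the spec offsets quoted earlier gives the registers involved here; note that this context's enable block starts at 0x0C002080, the very address from the original report, which was therefore targeting hart 0's S-mode context rather than core 1 (a sketch; the macro names are ours):

    /* Hart 0's S-mode external interrupt (SEI) is context 1 on virt,
       per the MEI/SEI pairs visible in the decompiled DTB. */
    #define PLIC_BASE       0x0C000000UL
    #define CTX1_ENABLE     (PLIC_BASE + 0x002000UL + 1u * 0x80UL)    /* 0x0C002080 */
    #define CTX1_THRESHOLD  (PLIC_BASE + 0x200000UL + 1u * 0x1000UL)  /* 0x0C201000 */
    #define CTX1_CLAIM      (CTX1_THRESHOLD + 4u)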

I'm going to close this bug, since it seems the issue RTOS Pharos raised was not actually a bug.

@Teodori Serge, please open a new issue if you have a bug. Make sure to include as much detail as possible, along with the steps to reproduce it.