[Compiler Principles] A Small Syntax Compiler: Gradio Interface Design

Preface

Parts of this article come from material gathered online and from my own practice. If any information is incorrect, corrections from readers are welcome. The article is for learning and exchange only and is not intended for any commercial use.
You are welcome to subscribe to the Gradio column.

Table of Contents

  • Preface
    • all/gui.py
    • lexical_analysis.py
      • Imports
      • The helper function `analyze_token`
      • The lexical-analysis function `lexical_analysis`
      • Test code
      • Summary
    • ll1_analysis.py
      • The `Type` class
      • Checking for terminals
      • Initialization function
      • Printing the analysis stack and remaining input
      • The analysis function
      • The LL1 analysis function
      • Test code
      • Summary
    • lr0_analysis.py
      • Function definition
      • Code walkthrough
        • Initializing the LR(0) parsing table and other variables
        • Helper functions
        • Main analysis loop
      • Summary
    • operator_precedence_analysis.py
      • Function definition
      • Code walkthrough
        • Defining the precedence table and helper functions
        • Initializing the output string
        • Main analysis loop
      • Summary

all/gui.py

# -*- coding: utf-8 -*-
import gradio as gr

from all.analysis_functions.lexical_analysis import lexical_analysis
from all.analysis_functions.ll1_analysis import ll1_analysis
from all.analysis_functions.lr0_analysis import lr0_analysis
from all.analysis_functions.operator_precedence_analysis import operator_precedence_analysis

# Define your input and output components for the Gradio interface
with gr.Blocks() as demo:
    # 词法分析标签
    with gr.Tab("词法分析"):
        lex_input = gr.components.Textbox(label="请输入源代码")
        lex_output = gr.components.Textbox(label="分析结果")
        lex_button = gr.components.Button("开始分析")
        lex_button.click(lexical_analysis, inputs=lex_input, outputs=lex_output)
        with gr.Column():  # 右边一列是输出
            gr.Examples(
                examples=[
                    ["""int a = 10;float b = 5.5;if (a < b) {cout << "a is less than b" << endl;}else if (a == b){cout << "a is equal to b" <<endl;}else {cout << "a is greater than b" << endl;)"""],
                    ["""int x = 5; int y = 10; int z = x + y;"""],
                    ["""for (int i = 0; i < 10; i++) {cout << i << endl;}"""],
                    ["""int a = 5;int b = 10;int c = a * b;cout << c << endl;"""],
                ],
                inputs=lex_input,
            )

    # LL1语法分析标签
    with gr.Tab("LL1语法分析"):
        # ll1_grammar_input = gr.components.Textbox(label="请输入文法")
        ll1_sentence_input = gr.components.Textbox(label="请输入句子")
        ll1_output = gr.components.Textbox(label="分析结果")
        ll1_button = gr.components.Button("使用LL1进行分析")
        # ll1_button.click(ll1_analysis, inputs=[ll1_grammar_input, ll1_sentence_input], outputs=ll1_output)
        ll1_button.click(ll1_analysis, inputs=[ll1_sentence_input], outputs=ll1_output)
        with gr.Column():  # 右边一列是输出
            gr.Examples(
                examples=[["i+i*i#"], ["i*i+i#"], ["i+i*(i+i)#"], ["i#"]],
                inputs=ll1_sentence_input,
            )

    # 算符优先语法分析标签
    with gr.Tab("算符优先语法分析"):
        # op_grammar_input = gr.components.Textbox(label="请输入文法")
        op_sentence_input = gr.components.Textbox(label="请输入句子")
        op_output = gr.components.Textbox(label="分析结果")
        op_button = gr.components.Button("使用算符优先方法进行分析")
        # op_button.click(operator_precedence_analysis, inputs=[op_grammar_input, op_sentence_input], outputs=op_output)
        op_button.click(operator_precedence_analysis, inputs=[op_sentence_input], outputs=op_output)
        with gr.Column():  # 右边一列是输出
            gr.Examples(
                examples=[["i+i*i#"], ["i+(i*i)#"], ["i+i*(i+i)#"], ["i+(i*ii)#"]],
                inputs=op_sentence_input,
            )

    # LR0语法分析标签
    with gr.Tab("LR0语法分析"):
        # lr0_grammar_input = gr.components.Textbox(label="请输入文法")
        lr0_sentence_input = gr.components.Textbox(label="请输入句子")
        lr0_output = gr.components.Textbox(label="分析结果")
        lr0_button = gr.components.Button("使用LR0进行分析")
        # lr0_button.click(lr0_analysis, inputs=[lr0_grammar_input, lr0_sentence_input], outputs=lr0_output)
        lr0_button.click(lr0_analysis, inputs=[lr0_sentence_input], outputs=lr0_output)
        with gr.Column():  # 右边一列是输出
            gr.Examples(
                examples=[["bccd#"], ["bccdd#"], ["bdd#"], ["E#"]],
                inputs=lr0_sentence_input,
            )

demo.launch(share=True)

This code builds the interface of a small syntax compiler with the Gradio library. The compiler offers four main functions: lexical analysis, LL1 parsing, operator-precedence parsing, and LR0 parsing. Each function lives in its own tab, where the user enters source code or a sentence and clicks a button to run the analysis. The code is explained in detail below:

  1. Import the library and the analysis functions

    import gradio as gr
    from all.analysis_functions.lexical_analysis import lexical_analysis
    from all.analysis_functions.ll1_analysis import ll1_analysis
    from all.analysis_functions.lr0_analysis import lr0_analysis
    from all.analysis_functions.operator_precedence_analysis import operator_precedence_analysis
    
  2. Create the Gradio interface

    with gr.Blocks() as demo:
    
  3. Lexical analysis tab

    • Input component: lex_input receives the source code.
    • Output component: lex_output shows the analysis result.
    • Button component: lex_button triggers the lexical analysis.
    • Examples: a few sample code snippets the user can click to run a quick test.
    with gr.Tab("词法分析"):
        lex_input = gr.components.Textbox(label="请输入源代码")
        lex_output = gr.components.Textbox(label="分析结果")
        lex_button = gr.components.Button("开始分析")
        lex_button.click(lexical_analysis, inputs=lex_input, outputs=lex_output)
        with gr.Column():
            gr.Examples(
                examples=[
                    ["int a = 10;float b = 5.5;if (a < b) { cout << \"a is less than b\" << endl;}else if (a == b){ cout << \"a is equal to b\" <<endl;}else { cout << \"a is greater than b\" << endl;}"],
                    ["int x = 5; int y = 10; int z = x + y;"],
                    ["for (int i = 0; i < 10; i++) { cout << i << endl; }"],
                    ["int a = 5; int b = 10; int c = a * b; cout << c << endl;"],
                ],
                inputs=lex_input,
            )
    
  4. LL1 parsing tab

    • Input component: ll1_sentence_input receives the sentence.
    • Output component: ll1_output shows the analysis result.
    • Button component: ll1_button triggers the LL1 analysis.
    • Examples: a few sample sentences the user can click to run a quick test.
    with gr.Tab("LL1语法分析"):
        ll1_sentence_input = gr.components.Textbox(label="请输入句子")
        ll1_output = gr.components.Textbox(label="分析结果")
        ll1_button = gr.components.Button("使用LL1进行分析")
        ll1_button.click(ll1_analysis, inputs=[ll1_sentence_input], outputs=ll1_output)
        with gr.Column():
            gr.Examples(
                examples=[["i+i*i#"], ["i*i+i#"], ["i+i*(i+i)#"], ["i#"]],
                inputs=ll1_sentence_input,
            )
    
  5. Operator-precedence parsing tab

    • Input component: op_sentence_input receives the sentence.
    • Output component: op_output shows the analysis result.
    • Button component: op_button triggers the operator-precedence analysis.
    • Examples: a few sample sentences the user can click to run a quick test.
    with gr.Tab("算符优先语法分析"):
        op_sentence_input = gr.components.Textbox(label="请输入句子")
        op_output = gr.components.Textbox(label="分析结果")
        op_button = gr.components.Button("使用算符优先方法进行分析")
        op_button.click(operator_precedence_analysis, inputs=[op_sentence_input], outputs=op_output)
        with gr.Column():
            gr.Examples(
                examples=[["i+i*i#"], ["i+(i*i)#"], ["i+i*(i+i)#"], ["i+(i*ii)#"]],
                inputs=op_sentence_input,
            )
    
  6. LR0 parsing tab

    • Input component: lr0_sentence_input receives the sentence.
    • Output component: lr0_output shows the analysis result.
    • Button component: lr0_button triggers the LR0 analysis.
    • Examples: a few sample sentences the user can click to run a quick test.
    with gr.Tab("LR0语法分析"):
        lr0_sentence_input = gr.components.Textbox(label="请输入句子")
        lr0_output = gr.components.Textbox(label="分析结果")
        lr0_button = gr.components.Button("使用LR0进行分析")
        lr0_button.click(lr0_analysis, inputs=[lr0_sentence_input], outputs=lr0_output)
        with gr.Column():
            gr.Examples(
                examples=[["bccd#"], ["bccdd#"], ["bdd#"], ["E#"]],
                inputs=lr0_sentence_input,
            )
    
  7. Launch the Gradio interface

    demo.launch(share=True)
    

This snippet shows how Gradio can be used to build an interactive front end for the compiler: both lexical analysis and the three kinds of syntax analysis are driven entirely from a simple graphical interface.
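
If you want to try the Tab/Button/Examples wiring on its own, the following is a minimal, self-contained sketch of the same pattern used above. It only assumes that gradio is installed; echo_analysis is a hypothetical stand-in for the real analysis functions.

import gradio as gr

def echo_analysis(text):
    # Stand-in for lexical_analysis / ll1_analysis / lr0_analysis / operator_precedence_analysis
    return "Received: " + text

with gr.Blocks() as demo:
    with gr.Tab("demo"):
        box_in = gr.Textbox(label="input")
        box_out = gr.Textbox(label="result")
        btn = gr.Button("analyze")
        btn.click(echo_analysis, inputs=box_in, outputs=box_out)
        gr.Examples(examples=[["i+i*i#"], ["bccd#"]], inputs=box_in)

demo.launch()  # add share=True for a temporary public link, as in gui.py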

lexical_analysis.py

This lexical_analysis.py file implements a simple lexical analyzer that splits the input source code into tokens and then classifies and labels each one. The code is explained below:

Imports

import re

The re module is Python's regular-expression library; it is used here for pattern matching and for splitting strings.
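
The two regular expressions this file relies on can be tried in isolation. The short demo below (the sample statement is arbitrary) shows how they slice up a line; both patterns are taken verbatim from lexical_analysis.

import re

line = 'int a=10;'
# Split into identifier-like runs, digit runs, or single non-whitespace characters
print(re.findall(r'[A-Za-z_]+|\d+|\S', line))   # ['int', 'a', '=', '10', ';']
# Locate the first operator/delimiter inside a word
print(re.search(r'[;=\+\-\*/<>]', 'a=10'))      # <re.Match object; span=(1, 2), match='='>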

The helper function analyze_token

def analyze_token(word, token_map, result):
    if word in token_map:
        result.append("(保留字--{},{})".format(token_map[word], word))
    elif word == ";":
        result.append("(分号--26,{})".format(word))
    elif word == "=":
        result.append("(等号--17,{})".format(word))
    elif word == "+":
        result.append("(加号--13,{})".format(word))
    elif word == "-":
        result.append("(减号--14,{})".format(word))
    elif word == "*":
        result.append("(乘号--15,{})".format(word))
    elif word == "/":
        result.append("(除号--16,{})".format(word))
    elif word == "<":
        result.append("(小于--20,{})".format(word))
    elif word == ">":
        result.append("(大于--24,{})".format(word))
    elif word == "==":
        result.append("(等于--22,{})".format(word))
    elif word == "!=":
        result.append("(不等于--23,{})".format(word))
    elif word == "<=":
        result.append("(小于等于--21,{})".format(word))
    elif word == ">=":
        result.append("(大于等于--25,{})".format(word))
    elif word.isdigit():
        result.append("(整数--11,{})".format(word))
    else:
        if word.isalpha() and word[0].isalpha():
            result.append("(标识符--10,{})".format(word))

This function classifies and labels a single token. It checks the token_map dictionary and a set of hard-coded rules to decide the token's type, and appends the labelled token to the result list.
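
As a quick check of the function on its own, a hypothetical call might look like this; the token_map below is a trimmed-down subset of the real table defined later in lexical_analysis, used only for illustration.

result = []
token_map = {"int": 1, "if": 3}   # subset of the real table
analyze_token("int", token_map, result)
analyze_token("=", token_map, result)
analyze_token("42", token_map, result)
analyze_token("count", token_map, result)
print(result)
# ['(保留字--1,int)', '(等号--17,=)', '(整数--11,42)', '(标识符--10,count)']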

The lexical-analysis function lexical_analysis

def lexical_analysis(input_code):
    token_map = {"int": 1, "float": 2, "double": 1, "char": 1, "if": 3, "then": 1,
                 "else": 1, "switch": 4, "case": 1, "break": 1, "continue": 1,
                 "while": 5, "do": 6, "for": 1}
    lines = input_code.splitlines()
    result = []
    for line in lines:
        words = re.findall(r'[A-Za-z_]+|\d+|\S', line)  # Split words based on alphabets, digits, or non-whitespace characters
        for word in words:
            pos = re.search(r'[;=\+\-\*/<>]', word)  # Find operator symbols in the word
            if pos is not None:
                start = pos.start()
                if start != 0:
                    analyze_token(word[:start], token_map, result)
                op = word[start]  # Get operator symbol
                analyze_token(op, token_map, result)
                if start < len(word) - 1:
                    analyze_token(word[start + 1:], token_map, result)
            else:
                analyze_token(word, token_map, result)
    return "Lexical Tokens: " + ", ".join(result)

This function implements the main lexical-analysis logic:

  1. A token_map dictionary stores the reserved words and their codes.
  2. The input code is split into lines.
  3. A regular expression splits each line into words and symbols.
  4. Every word or symbol is passed to analyze_token for classification and labelling.
  5. Finally, a string listing every token with its label is returned (see the usage sketch below).
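
A minimal driver, reusing one of the example statements from gui.py; the expected output shown in the comment is abbreviated.

print(lexical_analysis("int x = 5; int y = 10; int z = x + y;"))
# Lexical Tokens: (保留字--1,int), (标识符--10,x), (等号--17,=), (整数--11,5), (分号--26,;), ...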

Test code

# a="""
# int a = 10;float b = 5.5;if (a < b) {
# cout << "a is less than b" << endl;}else if (a == b){
# cout << "a is equal to b" <<endl;}else {
# cout << "a is greater than b" << endl;)
#
# """
# print(lexical_analysis(a))

This block is commented out, but it shows how to run lexical_analysis on a piece of code and print the result.

Summary

This lexical analyzer uses regular expressions and a set of predefined rules to split the input code into tokens and to classify and label each of them. It recognizes reserved words, operators, semicolons, integers, and identifiers, and returns a string listing every token with its code.

ll1_analysis.py

This ll1_analysis.py file implements an LL(1) parser that analyzes an input expression and prints the parsing process. The code is explained below:

The Type class

class Type:
    def __init__(self):
        self.origin = 'N'  # 产生式左侧字符 大写字符
        self.array = ""    # 产生式右边字符
        self.length = 0    # 字符个数

    def init(self, a, b):
        self.origin = a
        self.array = b
        self.length = len(self.array)

The Type class stores one production: the left-hand-side symbol (origin), the right-hand-side string (array), and its length.
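
For example, the production E->TG used later in analyze is stored like this (a throwaway snippet, only to show the three fields):

e = Type()
e.init('E', "TG")
print(e.origin, e.array, e.length)   # E TG 2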

Checking for terminals

def is_terminator(c):
    # 判断是否是终结符
    v1 = "i+*()#"
    return c in v1

This function checks whether a character is a terminal symbol.

Initialization function

def init(exp):
    ridx = 0
    len_exp = len(exp)
    rest_stack = list(exp)
    return ridx, len_exp, rest_stack

This function initializes the variables used during parsing: the read position, the input length, and the list of remaining characters.

Printing the analysis stack and remaining input

def print_stack(analyze_stack, top, ridx, len_exp, rest_stack):
    output = ''.join(analyze_stack[:top + 1]) + ' ' * (20 - top)
    for i in range(ridx):
        output += ' '
    for i in range(ridx, len_exp):
        output += rest_stack[i]
    output += '\t\t\t'
    return output

This function formats the current analysis stack and the remaining input for one line of the step-by-step output.

The analysis function

def analyze(exp):
    v1 = "i+*()#"  # 终结符
    v2 = "EGTSF"   # 非终结符
    e = Type()
    t = Type()
    g = Type()
    g1 = Type()
    s = Type()
    s1 = Type()
    f = Type()
    f1 = Type()
    e.init('E', "TG")
    t.init('T', "FS")
    g.init('G', "+TG")
    g1.init('G', "^")
    s.init('S', "*FS")
    s1.init('S', "^")
    f.init('F', "(E)")
    f1.init('F', "i")
    C = [[Type() for _ in range(10)] for _ in range(10)]
    C[0][0] = C[0][3] = e
    C[1][1] = g
    C[1][4] = C[1][5] = g1
    C[2][0] = C[2][3] = t
    C[3][2] = s
    C[3][4] = C[3][5] = C[3][1] = s1
    C[4][0] = f1
    C[4][3] = f
    analyze_stack = [' ' for _ in range(20)]
    output_str = ""
    ridx, len_exp, rest_stack = init(exp)
    top = 0
    analyze_stack[top] = '#'
    top += 1
    analyze_stack[top] = 'E'  # '#','E'进栈
    output_str += "步骤\t\t分析栈 \t\t\t\t\t剩余字符 \t\t\t\t所用产生式\n"
    k = 0
    while True:
        ch = rest_stack[ridx]
        x = analyze_stack[top]
        top -= 1  # x为当前栈顶字符
        output_str += str(k + 1).ljust(8)
        if x == '#':
            output_str += "分析成功!AC!\n"  # 接受
            return output_str
        if is_terminator(x):
            if x == ch:  # 匹配上了
                output_str += print_stack(analyze_stack, top, ridx, len_exp, rest_stack) + ch + "匹配\n"
                ridx += 1  # 下一个输入字符
            else:  # 出错处理
                output_str += print_stack(analyze_stack, top, ridx, len_exp, rest_stack) + "分析出错,错误终结符为" + ch + "\n"  # 输出出错终结符
                return output_str
        else:  # 非终结符处理
            m = v2.find(x) if x in v2 else -1  # 非终结符下标
            n = v1.find(ch) if ch in v1 else -1  # 终结符下标
            if m == -1 or n == -1:  # 出错处理
                output_str += print_stack(analyze_stack, top, ridx, len_exp, rest_stack) + "分析出错,错误非终结符为" + x + "\n"
                return output_str
            elif C[m][n].origin == 'N':  # 无产生式
                output_str += print_stack(analyze_stack, top, ridx, len_exp, rest_stack) + "分析出错,无法找到对应的产生式\n"  # 输出无产生式错误
                return output_str
            else:  # 有产生式
                length = C[m][n].length
                temp = C[m][n].array
                output_str += print_stack(analyze_stack, top, ridx, len_exp, rest_stack) + x + "->" + temp + "\n"  # 输出所用产生式
                for i in range(length - 1, -1, -1):
                    if temp[i] != '^':
                        top += 1
                        analyze_stack[top] = temp[i]  # 将右端字符逆序进栈
        k += 1

This function implements the core LL(1) parsing logic:

  1. It defines the terminals and non-terminals.
  2. It defines and initializes the productions.
  3. It builds the predictive parsing table C.
  4. It initializes the analysis stack and pushes '#' and 'E' onto it.
  5. It loops over the input expression, matching terminals and expanding non-terminals according to the table, until the parse succeeds or an error is reported (an equivalent dictionary form of the table is sketched below).
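
The predictive table is stored above as a 10x10 matrix of Type objects indexed by symbol position, which is compact but hard to read. The same entries can be written as a plain dictionary; the snippet below is only an illustrative re-encoding of the table set up in analyze, not code used by the project.

# Grammar: E->TG, G->+TG | ^, T->FS, S->*FS | ^, F->(E) | i   ('^' stands for epsilon)
ll1_table = {
    ('E', 'i'): "TG",  ('E', '('): "TG",
    ('G', '+'): "+TG", ('G', ')'): "^",  ('G', '#'): "^",
    ('T', 'i'): "FS",  ('T', '('): "FS",
    ('S', '*'): "*FS", ('S', '+'): "^",  ('S', ')'): "^", ('S', '#'): "^",
    ('F', 'i'): "i",   ('F', '('): "(E)",
}

def production_for(nonterminal, lookahead):
    # Returns the right-hand side to expand, or None when the table has no entry
    return ll1_table.get((nonterminal, lookahead))

print(production_for('E', 'i'))   # TG
print(production_for('G', '*'))   # None -> reported as a parse error in analyze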

The LL1 analysis function

def ll1_analysis(exp):
    output_str = analyze(exp)
    expressions = ["i+i*i#", "i*i+i#", "i+i*(i+i)#", "i#"]  # sample inputs; not actually used
    return output_str

This function simply calls analyze on the input expression and returns the formatted result; the expressions list defined inside it is never used and could be removed.
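
A minimal driver, assuming the package layout used in gui.py is importable from the project root:

from all.analysis_functions.ll1_analysis import ll1_analysis

print(ll1_analysis("i+i*i#"))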

Test code

# if __name__ == "__main__":
#     exp = "i+i*i#"
#     output = ll1_analysis(exp)
#     print(output)

This block is commented out, but it shows how to run ll1_analysis on an expression and print the parsing steps.

Summary

This LL(1) parser defines the grammar's productions and a predictive parsing table, and uses them to analyze the input expression. It distinguishes terminals from non-terminals, matches or expands symbols according to the table, and outputs the parsing steps together with the final verdict.

lr0_analysis.py

This lr0_analysis.py file implements an LR(0) parser that analyzes an input string and prints the parsing process. The code is explained below:

Function definition

def lr0_analysis(input_string):
    LR0 = [["S2", "S3", "null", "null", "null", "1", "null", "null"],        # 0
           ["null", "null", "null", "null", "acc", "null", "null", "null"],  # 1
           ["null", "null", "S4", "S10", "null", "null", "6", "null"],       # 2
           ["null", "null", "S5", "S11", "null", "null", "null", "7"],       # 3
           ["null", "null", "S4", "S10", "null", "null", "8", "null"],       # 4
           ["null", "null", "S5", "S11", "null", "null", "null", "9"],       # 5
           ["r1", "r1", "r1", "r1", "r1", "null", "null", "null"],           # 6
           ["r2", "r2", "r2", "r2", "r2", "null", "null", "null"],           # 7
           ["r3", "r3", "r3", "r3", "r3", "null", "null", "null"],           # 8
           ["r5", "r5", "r5", "r5", "r5", "null", "null", "null"],           # 9
           ["r4", "r4", "r4", "r4", "r4", "null", "null", "null"],           # 10
           ["r6", "r6", "r6", "r6", "r6", "null", "null", "null"]]           # 11
    L = "abcd#EAB"                      # 列标签
    del_rule = [0, 2, 2, 2, 1, 2, 1]    # 每个产生式规则的长度
    head = ['S', 'E', 'E', 'A', 'A', 'B', 'B']  # 非终结符号
    con = [0]       # 状态栈
    cmp = ['#']     # 符号栈
    cod = '0'       # 状态栈对应输出字符串
    signal = ''     # 符号栈对应输出字符串
    sti = '#'       # 符号栈对应输出字符串

    def findL(b):
        """在L数组中找到列标签的索引。"""
        for i in range(len(L)):
            if b == L[i]:
                return i
        return -1

    def error(x, y):
        """当LR0表中的单元为空时,打印错误消息。"""
        return f"错误:单元格[{x}, {y}]为空!"

    def calculate(l, s):
        """将LR0表中的数字字符串转换为整数。"""
        num = int(s[1:l])
        return num

    output = "步骤     状态栈       符号栈       输入     ACTION     GOTO\n"
    LR = 0
    while LR < len(input_string):
        step_output = f"({LR+1})     {cod}         {sti} "
        step_output += input_string[LR:] + " " * (10 - (len(input_string) - LR)) + " "
        x = con[-1]
        y = findL(input_string[LR])
        if LR0[x][y] != "null":
            action = LR0[x][y]
            l = len(action)
            if action[0] == 'a':
                step_output += "acc\n"
                output += step_output
                return output
            elif action[0] == 'S':
                step_output += action + "\n"
                t = calculate(l, action)
                con.append(t)
                sti += input_string[LR]
                cmp.append(input_string[LR])
                if t < 10:
                    cod += action[1]
                else:
                    k = 1
                    cod += '('
                    while k < l:
                        cod += action[k]
                        k += 1
                    cod += ')'
                LR += 1
            elif action[0] == 'r':
                step_output += action + " "
                t = calculate(l, action)
                g = del_rule[t]
                while g > 0:
                    con.pop()
                    cmp.pop()
                    sti = sti[:-1]
                    g -= 1
                g = del_rule[t]
                while g > 0:
                    if cod[-1] == ')':
                        cod = cod[:-1]
                        while cod[-1] != '(':
                            cod = cod[:-1]
                        cod = cod[:-1]
                        g -= 1
                    else:
                        cod = cod[:-1]
                        g -= 1
                cmp.append(head[t])
                sti += head[t]
                x = con[-1]
                y = findL(cmp[-1])
                t = int(LR0[x][y][0])
                con.append(t)
                cod += LR0[x][y][0]
                step_output += str(t) + "\n"
            else:
                t = int(LR0[x][y][0])
                step_output += " " + str(t) + "\n"
                con.append(t)
                cod += LR0[x][y][0]
                sti += 'E'
                LR += 1
        else:
            step_output += error(x, y) + "\n"
            output += step_output
            return output
        output += step_output
    return output


# input_string = "bccd#"
# output = lr0_analysis(input_string)
# print(output)

Code walkthrough

Initializing the LR(0) parsing table and other variables
LR0 = [["S2", "S3", "null", "null", "null", "1", "null", "null"],        # 0
       ["null", "null", "null", "null", "acc", "null", "null", "null"],  # 1
       ["null", "null", "S4", "S10", "null", "null", "6", "null"],       # 2
       ["null", "null", "S5", "S11", "null", "null", "null", "7"],       # 3
       ["null", "null", "S4", "S10", "null", "null", "8", "null"],       # 4
       ["null", "null", "S5", "S11", "null", "null", "null", "9"],       # 5
       ["r1", "r1", "r1", "r1", "r1", "null", "null", "null"],           # 6
       ["r2", "r2", "r2", "r2", "r2", "null", "null", "null"],           # 7
       ["r3", "r3", "r3", "r3", "r3", "null", "null", "null"],           # 8
       ["r5", "r5", "r5", "r5", "r5", "null", "null", "null"],           # 9
       ["r4", "r4", "r4", "r4", "r4", "null", "null", "null"],           # 10
       ["r6", "r6", "r6", "r6", "r6", "null", "null", "null"]]           # 11
L = "abcd#EAB"   # 列标签
del_rule = [0, 2, 2, 2, 1, 2, 1]   # 每个产生式规则的长度
head = ['S', 'E', 'E', 'A', 'A', 'B', 'B']   # 非终结符号
con = [0]   # 状态栈
cmp = ['#']   # 符号栈
cod = '0'   # 状态栈对应输出字符串
signal = ''   # 符号栈对应输出字符串
sti = '#'   # 符号栈对应输出字符串
Helper functions

def findL(b):
    """在L数组中找到列标签的索引。"""
    for i in range(len(L)):
        if b == L[i]:
            return i
    return -1

def error(x, y):
    """当LR0表中的单元为空时,打印错误消息。"""
    return f"错误:单元格[{x}, {y}]为空!"

def calculate(l, s):
    """将LR0表中的数字字符串转换为整数。"""
    num = int(s[1:l])
    return num
Main analysis loop
output = "步骤     状态栈       符号栈       输入     ACTION     GOTO\n"
LR = 0
while LR < len(input_string):
    step_output = f"({LR+1})     {cod}         {sti} "
    step_output += input_string[LR:] + " " * (10 - (len(input_string) - LR)) + " "
    x = con[-1]
    y = findL(input_string[LR])
    if LR0[x][y] != "null":
        action = LR0[x][y]
        l = len(action)
        if action[0] == 'a':
            step_output += "acc\n"
            output += step_output
            return output
        elif action[0] == 'S':
            step_output += action + "\n"
            t = calculate(l, action)
            con.append(t)
            sti += input_string[LR]
            cmp.append(input_string[LR])
            if t < 10:
                cod += action[1]
            else:
                k = 1
                cod += '('
                while k < l:
                    cod += action[k]
                    k += 1
                cod += ')'
            LR += 1
        elif action[0] == 'r':
            step_output += action + " "
            t = calculate(l, action)
            g = del_rule[t]
            while g > 0:
                con.pop()
                cmp.pop()
                sti = sti[:-1]
                g -= 1
            g = del_rule[t]
            while g > 0:
                if cod[-1] == ')':
                    cod = cod[:-1]
                    while cod[-1] != '(':
                        cod = cod[:-1]
                    cod = cod[:-1]
                    g -= 1
                else:
                    cod = cod[:-1]
                    g -= 1
            cmp.append(head[t])
            sti += head[t]
            x = con[-1]
            y = findL(cmp[-1])
            t = int(LR0[x][y][0])
            con.append(t)
            cod += LR0[x][y][0]
            step_output += str(t) + "\n"
        else:
            t = int(LR0[x][y][0])
            step_output += " " + str(t) + "\n"
            con.append(t)
            cod += LR0[x][y][0]
            sti += 'E'
            LR += 1
    else:
        step_output += error(x, y) + "\n"
        output += step_output
        return output
    output += step_output
return output

Summary

This LR(0) parser hard-codes an ACTION/GOTO table and a few helper functions to analyze the input string. For each input symbol it looks up the entry for the current state, shifts or reduces accordingly, and outputs the step-by-step parsing process and the final result.
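
A minimal driver is shown below. Judging from del_rule, head, and the column labels "abcd#EAB", the hard-coded table appears to correspond to the classic grammar E->aA | bB, A->cA | d, B->cB | d; this reading is an inference from the code, not something stated in the file.

from all.analysis_functions.lr0_analysis import lr0_analysis

# "bccd#" should end in "acc"; see the examples in gui.py for more inputs.
print(lr0_analysis("bccd#"))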

operator_precedence_analysis.py

This operator_precedence_analysis.py file implements an operator-precedence parser that analyzes an input expression and prints the parsing process. The code is explained below:

Function definition

def operator_precedence_analysis(input_str):
    priority = [['>', '<', '<', '<', '>', '>'],
                ['>', '>', '<', '<', '>', '>'],
                ['>', '>', '$', '$', '>', '>'],
                ['<', '<', '<', '<', '=', '$'],
                ['>', '>', '$', '$', '>', '>'],
                ['<', '<', '<', '<', '$', '=']]

    def testchar(x):
        if x == '+':
            return 0
        elif x == '*':
            return 1
        elif x == 'i':
            return 2
        elif x == '(':
            return 3
        elif x == ')':
            return 4
        elif x == '#':
            return 5
        else:
            return -1

    def remainString(remaining_input):
        return remaining_input[1:]

    output_str = ""
    output_str += "文法为:\n"
    output_str += "(0)E'->#E#\n"
    output_str += "(1)E->E+T\n"
    output_str += "(2)E->T\n"
    output_str += "(3)T->T*F\n"
    output_str += "(4)T->F\n"
    output_str += "(5)F->(E)\n"
    output_str += "(6)F->i\n"
    output_str += "-----------------------------------------\n"
    output_str += "           算符优先关系表                \n"
    output_str += "     +   *   i   (   )   #               \n"
    output_str += " +   >   <   <   <   >   >               \n"
    output_str += " *   >   >   <   <   >   >               \n"
    output_str += " i   >   >           >   >               \n"
    output_str += " (   <   <   <   <   =                   \n"
    output_str += " )   >   >           >   >               \n"
    output_str += " #   <   <   <   <       =               \n"
    output_str += "-----------------------------------------\n"

    input_lines = [input_str + '#']
    for input_str in input_lines:
        input = list(input_str)
        k = 0
        AnalyseStack = ['#']
        rem = input[1:]
        i = 0
        f = len(input)
        count = 0
        output_str += "\n步骤\t  符号栈\t  优先关系\t  输入串\t  移进或归约\n"
        while i <= f:
            a = input[i]
            if i == 0:
                rem = remainString(rem)
            if AnalyseStack[k] in ['+', '*', 'i', '(', ')', '#']:
                j = k
            else:
                j = k - 1
            z = testchar(AnalyseStack[j])
            if a in ['+', '*', 'i', '(', ')', '#']:
                n = testchar(a)
            else:
                output_str += "错误!该句子不是该文法的合法句子!"
                return output_str
            p = priority[z][n]
            if p == '$':
                output_str += "错误!该句子不是该文法的合法句子!"
                return output_str
            if p == '>':
                while True:
                    Q = AnalyseStack[j]
                    if AnalyseStack[j - 1] in ['+', '*', 'i', '(', ')', '#']:
                        j = j - 1
                    else:
                        j = j - 2
                    z1 = testchar(AnalyseStack[j])
                    n1 = testchar(Q)
                    p1 = priority[z1][n1]
                    if p1 == '<':
                        count += 1
                        output = "({})\t  {}\t       {}\t      {}\t        {:17}\t    约归".format(
                            count, ' '.join(AnalyseStack), p, a, ''.join(rem))
                        output_str += output + "\n"
                        k = j + 1
                        i -= 1
                        AnalyseStack[k] = 'N'
                        AnalyseStack = AnalyseStack[:k + 1]
                        break
                    else:
                        continue
            else:
                if p == '<':
                    count += 1
                    output = "({})\t  {}\t       {}\t      {}\t        {:17}\t    移进".format(
                        count, ' '.join(AnalyseStack), p, a, ''.join(rem))
                    output_str += output + "\n"
                    k += 1
                    AnalyseStack.append(a)
                    rem = remainString(rem)
                elif p == '=':
                    z2 = testchar(AnalyseStack[j])
                    n2 = testchar('#')
                    p2 = priority[z2][n2]
                    if p2 == '=':
                        count += 1
                        output = "({})\t  {}\t       {}\t      {}\t        {:17}\t    接受".format(
                            count, ' '.join(AnalyseStack), p, a, ''.join(rem))
                        output_str += output + "\n"
                        output_str += "该句子是该文法的合法句子。\n"
                        break
                    else:
                        count += 1
                        output = "({})\t  {}\t       {}\t      {}\t        {:17}\t    移进".format(
                            count, ' '.join(AnalyseStack), p, a, ''.join(rem))
                        output_str += output + "\n"
                        k += 1
                        AnalyseStack.append(a)
                        rem = remainString(rem)
                else:
                    output_str += "错误!该句子不是该文法的合法句子!"
                    return output_str
            i += 1
    return output_str


# # Example usage:
# input_str = "i+i*i"
# output_str = operator_precedence_analysis(input_str)
# print(output_str)

Code walkthrough

Defining the precedence table and helper functions
priority = [['>', '<', '<', '<', '>', '>'],
            ['>', '>', '<', '<', '>', '>'],
            ['>', '>', '$', '$', '>', '>'],
            ['<', '<', '<', '<', '=', '$'],
            ['>', '>', '$', '$', '>', '>'],
            ['<', '<', '<', '<', '$', '=']]

def testchar(x):
    if x == '+':
        return 0
    elif x == '*':
        return 1
    elif x == 'i':
        return 2
    elif x == '(':
        return 3
    elif x == ')':
        return 4
    elif x == '#':
        return 5
    else:
        return -1

def remainString(remaining_input):
    return remaining_input[1:]
Initializing the output string

output_str = ""
output_str += "文法为:\n"
output_str += "(0)E'->#E#\n"
output_str += "(1)E->E+T\n"
output_str += "(2)E->T\n"
output_str += "(3)T->T*F\n"
output_str += "(4)T->F\n"
output_str += "(5)F->(E)\n"
output_str += "(6)F->i\n"
output_str += "-----------------------------------------\n"
output_str += "           算符优先关系表                \n"
output_str += "     +   *   i   (   )   #               \n"
output_str += " +   >   <   <   <   >   >               \n"
output_str += " *   >   >   <   <   >   >               \n"
output_str += " i   >   >           >   >               \n"
output_str += " (   <   <   <   <   =                   \n"
output_str += " )   >   >           >   >               \n"
output_str += " #   <   <   <   <       =               \n"
output_str += "-----------------------------------------\n"
Main analysis loop
input_lines = [input_str + '#']
for input_str in input_lines:
    input = list(input_str)
    k = 0
    AnalyseStack = ['#']
    rem = input[1:]
    i = 0
    f = len(input)
    count = 0
    output_str += "\n步骤\t  符号栈\t  优先关系\t  输入串\t  移进或归约\n"
    while i <= f:
        a = input[i]
        if i == 0:
            rem = remainString(rem)
        if AnalyseStack[k] in ['+', '*', 'i', '(', ')', '#']:
            j = k
        else:
            j = k - 1
        z = testchar(AnalyseStack[j])
        if a in ['+', '*', 'i', '(', ')', '#']:
            n = testchar(a)
        else:
            output_str += "错误!该句子不是该文法的合法句子!"
            return output_str
        p = priority[z][n]
        if p == '$':
            output_str += "错误!该句子不是该文法的合法句子!"
            return output_str
        if p == '>':
            while True:
                Q = AnalyseStack[j]
                if AnalyseStack[j - 1] in ['+', '*', 'i', '(', ')', '#']:
                    j = j - 1
                else:
                    j = j - 2
                z1 = testchar(AnalyseStack[j])
                n1 = testchar(Q)
                p1 = priority[z1][n1]
                if p1 == '<':
                    count += 1
                    output = "({})\t  {}\t       {}\t      {}\t        {:17}\t    约归".format(
                        count, ' '.join(AnalyseStack), p, a, ''.join(rem))
                    output_str += output + "\n"
                    k = j + 1
                    i -= 1
                    AnalyseStack[k] = 'N'
                    AnalyseStack = AnalyseStack[:k + 1]
                    break
                else:
                    continue
        else:
            if p == '<':
                count += 1
                output = "({})\t  {}\t       {}\t      {}\t        {:17}\t    移进".format(
                    count, ' '.join(AnalyseStack), p, a, ''.join(rem))
                output_str += output + "\n"
                k += 1
                AnalyseStack.append(a)
                rem = remainString(rem)
            elif p == '=':
                z2 = testchar(AnalyseStack[j])
                n2 = testchar('#')
                p2 = priority[z2][n2]
                if p2 == '=':
                    count += 1
                    output = "({})\t  {}\t       {}\t      {}\t        {:17}\t    接受".format(
                        count, ' '.join(AnalyseStack), p, a, ''.join(rem))
                    output_str += output + "\n"
                    output_str += "该句子是该文法的合法句子。\n"
                    break
                else:
                    count += 1
                    output = "({})\t  {}\t       {}\t      {}\t        {:17}\t    移进".format(
                        count, ' '.join(AnalyseStack), p, a, ''.join(rem))
                    output_str += output + "\n"
                    k += 1
                    AnalyseStack.append(a)
                    rem = remainString(rem)
            else:
                output_str += "错误!该句子不是该文法的合法句子!"
                return output_str
        i += 1
return output_str

Summary

This operator-precedence parser uses a hard-coded precedence table and a few helper functions to analyze the input expression. It compares the terminal nearest the top of the stack with the incoming symbol, shifts on '<' or '=', reduces on '>', and outputs the step-by-step parsing process and the final result.
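
A minimal driver, mirroring the commented-out test at the bottom of the file; note that the function appends the trailing '#' to the input itself, so it is called here without one.

from all.analysis_functions.operator_precedence_analysis import operator_precedence_analysis

print(operator_precedence_analysis("i+i*i"))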

