愿你 posted on 2020-3-20 11:39:20

Error when a Python file imports a file in the same directory

attempted relative import with no known parent package

qiuyouzhi posted on 2020-3-20 11:42:39

Please learn how to ask a question first.

愿你 posted on 2020-3-20 12:24:21

qiuyouzhi posted on 2020-3-20 11:42
Please learn how to ask a question first.

I just couldn't upload the image, no matter what I tried...

lixiangyv posted on 2020-3-20 13:12:22

If you can't upload an image, then post the source code and the output.

愿你 posted on 2020-3-20 14:53:08

lixiangyv posted on 2020-3-20 13:12
If you can't upload an image, then post the source code and the output.

import cv2
from .lpr import LPRLite as pr
import numpy as np

def recognizeOneImage(src):
    # grr = cv2.imread("image/(2).jpg")
    grr = cv2.imread(src)
    model = pr.LPR("model/cascade.xml", "model/model12.h5", "model/ocr_plate_all_gru.h5")
    for pstr, confidence, rect in model.SimpleRecognizePlateByE2E(grr):
        if confidence > 0.7:
            image = drawRectBox(grr, rect, pstr + " " + str(round(confidence, 3)))
            print("plate_str:")
            print(pstr)
            print("plate_confidence")
            print(confidence)

    cv2.imwrite('images_rec1/' + 'image_new.jpg', image)
    return pstr


str = recognizeOneImage('image/(2).jpg')
print(str)
print("00000000000000000")


The error I get:
D:\Anaconda\envs\tensorflow\python.exe D:/pycharm/djangocode/projectt/project4/myApp/python_LPR/demo.py
Traceback (most recent call last):
File "D:/pycharm/djangocode/projectt/project4/myApp/python_LPR/demo.py", line 63, in <module>
    from .lpr import LPRLite as pr
ImportError: attempted relative import with no known parent package

lixiangyv posted on 2020-3-20 17:21:36

In an import, the leading . means "import a module from this package".
Since you only have this one script, it doesn't count as a package.
To turn it into a package, create an __init__.py file in the folder the script lives in,
and that folder also has to contain the lpr.py module.

Alternatively, you can drop the . from "from .lpr import LPRLite as pr"; then no __init__.py is needed,
because when Python imports a module it first looks for it in the current folder.
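
A minimal sketch of both options (the folder name python_LPR is taken from the paths in the traceback, so treat the exact layout as an assumption):

# Option 1: drop the relative dot and rely on the script's own folder being searched first
# (works when demo.py is started directly, e.g. `python demo.py`):
#     from lpr import LPRLite as pr      # instead of: from .lpr import LPRLite as pr
#
# Option 2: keep the relative import, but make the folder a real package and run the
# script as part of it. Assumed layout:
#     python_LPR/
#         __init__.py    # empty file, marks the folder as a package
#         lpr.py         # must define LPRLite
#         demo.py        # keeps: from .lpr import LPRLite as pr
# then start it from the parent directory with:
#     python -m python_LPR.demo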

愿你 posted on 2020-3-21 12:23:51

lixiangyv posted on 2020-3-20 17:21
In an import, the leading . means "import a module from this package".
Since you only have this one script, it doesn't count as a package.
To turn it into a package, you should create ...

Got it! Thanks a lot, hehe.
By the way... can I ask you one more question and see whether you know it?
Have you ever run into an error like this?
File "D:\Anaconda\lib\site-packages\h5py\_hl\files.py", line 92, in make_fid
    fid = h5f.open(name, flags, fapl=fapl)
File "h5py\_objects.pyx", line 54, in h5py._objects.with_phil.wrapper (C:\Minonda\conda-bld\h5py_1482647201869\work\h5py\_ob
jects.c:2866)
File "h5py\_objects.pyx", line 55, in h5py._objects.with_phil.wrapper (C:\Minonda\conda-bld\h5py_1482647201869\work\h5py\_ob
jects.c:2824)
File "h5py\h5f.pyx", line 76, in h5py.h5f.open (C:\Minonda\conda-bld\h5py_1482647201869\work\h5py\h5f.c:2112)
OSError: Unable to open file (Unable to open file: name = 'model/model12.h5', errno = 2, error message = 'no such file or dire
ctory', flags = 0, o_flags = 0)
"POST /upload_

lixiangyv posted on 2020-3-29 09:33:11

愿你 posted on 2020-3-21 12:23
Got it! Thanks a lot, hehe.
By the way... can I ask you one more question and see whether you know it?
Have you ever run into an error like this?

Can you post the source code?

愿你 posted on 2020-4-1 10:17:24

lixiangyv posted on 2020-3-29 09:33
Can you post the source code?

How should I post the source? In that code-block format?

lixiangyv posted on 2020-4-2 10:49:44

Post it with the <> (code) button.

愿你 posted on 2020-4-2 13:55:50

lixiangyv posted on 2020-4-2 10:49
Post it with the <> (code) button.

First, what I am trying to build: the WeChat mini-program front end uploads a license-plate photo to the back end (the back end is built with Django). The Django back end then has to invoke another Python file (the one that actually recognizes the plate), and this invocation is done with an RPC remote call (a minimal sketch of this RPC pattern is shown after the view code).
The relevant Django back-end view code:
def upload_handle(request):
    print("====")
    user=request.POST.get("nickName")
    gender = request.POST.get("gender")
    avatarUrl = request.POST.get("avatarUrl")
    user_obj=UserList.objects.create(user=user,gender=gender,avatarUrl=avatarUrl)
    user_obj.save()

    card_imgs=request.FILES.get('file')
    card_obj = CardList.objects.create(card_img=card_imgs)
    card_obj.save()
    current_dir = os.getcwd()
    path = current_dir + card_obj.card_img.url
    card_obj.card_address = str(path)
    card_obj.save()
    print("111")
    print(path)
    server = ServerProxy("http://localhost:8888")  # initialize the connection to the RPC server
    print(server.get_platestr(path))
    platestr=server.get_platestr(path)
    print("222")

    return HttpResponse(platestr)
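
For reference, a minimal, self-contained sketch of the XML-RPC pattern being used here. The port 8888 and the registered name get_platestr match the code in this thread; the placeholder function body and the test path are only illustrative:

# --- server side: a stand-in for the recognizer process (carPlateIdentity.py) ---
from xmlrpc.server import SimpleXMLRPCServer

def get_platestr(path):
    # placeholder: the real process runs the recognition pipeline on `path`
    return "plate#0.99"

server = SimpleXMLRPCServer(('localhost', 8888))
server.register_function(get_platestr, "get_platestr")
# server.serve_forever()   # uncomment in the recognizer process; this call blocks

# --- client side: what the Django view does ---
from xmlrpc.client import ServerProxy

proxy = ServerProxy("http://localhost:8888")
# print(proxy.get_platestr(r"D:\some\image.jpg"))   # works only while the server process is running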
The code of the additionally invoked Python file:

import cv2
import os
import sys
import numpy as np
import tensorflow as tf

car_plate_w, car_plate_h = 136, 36
char_w, char_h = 20, 20
char_table = ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9', 'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K',
            'L', 'M', 'N', 'O', 'P', 'Q', 'R', 'S', 'T', 'U', 'V', 'W', 'X', 'Y', 'Z', '川', '鄂', '赣', '甘', '贵',
            '桂', '黑', '沪', '冀', '津', '京', '吉', '辽', '鲁', '蒙', '闽', '宁', '青', '琼', '陕', '苏', '晋',
            '皖', '湘', '新', '豫', '渝', '粤', '云', '藏', '浙']


def hist_image(img):
    assert img.ndim == 2
    hist = [0 for i in range(256)]
    img_h, img_w = img.shape[0], img.shape[1]

    for row in range(img_h):
        for col in range(img_w):
            hist[img[row, col]] += 1
    p = [hist[n] / (img_w * img_h) for n in range(256)]
    p1 = np.cumsum(p)
    for row in range(img_h):
        for col in range(img_w):
            v = img[row, col]
            img[row, col] = p1[v] * 255
    return img


def find_board_area(img):
    assert img.ndim == 2
    img_h, img_w = img.shape[0], img.shape[1]
    top, bottom, left, right = 0, img_h, 0, img_w
    flag = False
    h_proj = [0 for i in range(img_h)]
    v_proj = [0 for i in range(img_w)]

    for row in range(round(img_h * 0.5), round(img_h * 0.8), 3):
        for col in range(img_w):
            if img[row, col] == 255:
                h_proj[row] += 1
        if flag == False and h_proj[row] > 12:
            flag = True
            top = row
        if flag == True and row > top + 8 and h_proj[row] < 12:
            bottom = row
            flag = False

    for col in range(round(img_w * 0.3), img_w, 1):
        for row in range(top, bottom, 1):
            if img[row, col] == 255:
                v_proj[col] += 1
        if flag == False and (v_proj[col] > 10 or v_proj[col] - v_proj[col - 1] > 5):
            left = col
            break
    return left, top, 120, bottom - top - 10


def verify_scale(rotate_rect):
    error = 0.4
    aspect = 4  # 4.7272
    min_area = 10 * (10 * aspect)
    max_area = 150 * (150 * aspect)
    min_aspect = aspect * (1 - error)
    max_aspect = aspect * (1 + error)
    theta = 30

    # If the width or height is 0, it is not a valid rectangle; return False right away
    if rotate_rect[1][0] == 0 or rotate_rect[1][1] == 0:
        return False

    r = rotate_rect[1][0] / rotate_rect[1][1]
    r = max(r, 1 / r)
    area = rotate_rect[1][0] * rotate_rect[1][1]
    if area > min_area and area < max_area and r > min_aspect and r < max_aspect:
        # The rectangle's tilt angle must not exceed theta
        if ((rotate_rect[1][0] < rotate_rect[1][1] and rotate_rect[2] >= -90 and rotate_rect[2] < -(90 - theta)) or
                (rotate_rect[1][1] < rotate_rect[1][0] and rotate_rect[2] > -theta and rotate_rect[2] <= 0)):
            return True
    return False


def img_Transform(car_rect, image):
    img_h, img_w = image.shape[:2]
    rect_w, rect_h = car_rect[1][0], car_rect[1][1]
    angle = car_rect[2]

    return_flag = False
    if car_rect[2] == 0:
        return_flag = True
    if car_rect[2] == -90 and rect_w < rect_h:
        rect_w, rect_h = rect_h, rect_w
        return_flag = True
    if return_flag:
        car_img = image[int(car_rect[0][1] - rect_h / 2):int(car_rect[0][1] + rect_h / 2),
                        int(car_rect[0][0] - rect_w / 2):int(car_rect[0][0] + rect_w / 2)]
        return car_img

    car_rect = (car_rect[0], (rect_w, rect_h), angle)
    box = cv2.boxPoints(car_rect)

    heigth_point = right_point = [0, 0]
    left_point = low_point = [car_rect[0][0], car_rect[0][1]]
    for point in box:
        if left_point[0] > point[0]:
            left_point = point
        if low_point[1] > point[1]:
            low_point = point
        if heigth_point[1] < point[1]:
            heigth_point = point
        if right_point[0] < point[0]:
            right_point = point

    if left_point[1] <= right_point[1]:  # positive angle
        new_right_point = [right_point[0], heigth_point[1]]
        pts1 = np.float32([left_point, heigth_point, right_point])
        pts2 = np.float32([left_point, heigth_point, new_right_point])  # only the character height needs to change
        M = cv2.getAffineTransform(pts1, pts2)
        dst = cv2.warpAffine(image, M, (round(img_w * 2), round(img_h * 2)))
        car_img = dst[int(left_point[1]):int(heigth_point[1]), int(left_point[0]):int(new_right_point[0])]

    elif left_point[1] > right_point[1]:  # negative angle
        new_left_point = [left_point[0], heigth_point[1]]
        pts1 = np.float32([left_point, heigth_point, right_point])
        pts2 = np.float32([new_left_point, heigth_point, right_point])  # only the character height needs to change
        M = cv2.getAffineTransform(pts1, pts2)
        dst = cv2.warpAffine(image, M, (round(img_w * 2), round(img_h * 2)))
        car_img = dst[int(right_point[1]):int(heigth_point[1]), int(new_left_point[0]):int(right_point[0])]

    return car_img


def pre_process(orig_img):
    gray_img = cv2.cvtColor(orig_img, cv2.COLOR_BGR2GRAY)
    # cv2.imshow('gray_img', gray_img)
    # cv2.waitKey(0)

    blur_img = cv2.blur(gray_img, (3, 3))
    # cv2.imshow('blur', blur_img)
    # cv2.waitKey(0)

    sobel_img = cv2.Sobel(blur_img, cv2.CV_16S, 1, 0, ksize=3)
    sobel_img = cv2.convertScaleAbs(sobel_img)
    # cv2.imshow('sobel', sobel_img)
    # cv2.waitKey(0)

    hsv_img = cv2.cvtColor(orig_img, cv2.COLOR_BGR2HSV)
    # cv2.imshow('hsv', hsv_img)
    # cv2.waitKey(0)

    h, s, v = hsv_img[:, :, 0], hsv_img[:, :, 1], hsv_img[:, :, 2]
    # Yellow hue range and blue hue range:
    blue_img = (((h > 26) & (h < 34)) | ((h > 100) & (h < 124))) & (s > 70) & (v > 70)
    blue_img = blue_img.astype('float32')
    # cv2.imshow('blue', blue_img)
    # cv2.waitKey(0)

    mix_img = np.multiply(sobel_img, blue_img)
    # cv2.imshow('mix', mix_img)
    # cv2.waitKey(0)

    mix_img = mix_img.astype(np.uint8)

    ret, binary_img = cv2.threshold(mix_img, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
    # cv2.imshow('binary',binary_img)
    # cv2.waitKey(0)

    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (21, 5))
    close_img = cv2.morphologyEx(binary_img, cv2.MORPH_CLOSE, kernel)
    # cv2.imshow('close', close_img)
    # cv2.waitKey(0)

    return close_img


# Run flood fill on the candidate plate region: this compensates for contours that may have been
# distorted in the previous contour-finding step, and it also helps rule out non-plate regions
def verify_color(rotate_rect, src_image):
    img_h, img_w = src_image.shape[:2]
    mask = np.zeros(shape=[img_h + 2, img_w + 2], dtype=np.uint8)
    connectivity = 4  # 4-connectivity: neighbours whose values lie within [loDiff, upDiff] of the seed are painted new_value; 8-connectivity is also possible
    loDiff, upDiff = 30, 30
    new_value = 255
    flags = connectivity
    flags |= cv2.FLOODFILL_FIXED_RANGE  # compare each pixel against the seed pixel; without this flag it is compared against its neighbours
    flags |= new_value << 8
    flags |= cv2.FLOODFILL_MASK_ONLY  # with this flag the original image is left untouched and only the mask image is filled

    rand_seed_num = 5000  # number of random seeds to generate
    valid_seed_num = 200  # randomly pick valid_seed_num valid seeds out of rand_seed_num
    adjust_param = 0.1
    box_points = cv2.boxPoints(rotate_rect)
    box_points_x = [n[0] for n in box_points]
    box_points_x.sort(reverse=False)
    adjust_x = int((box_points_x[2] - box_points_x[1]) * adjust_param)
    col_range = [box_points_x[1] + adjust_x, box_points_x[2] - adjust_x]
    box_points_y = [n[1] for n in box_points]
    box_points_y.sort(reverse=False)
    adjust_y = int((box_points_y[2] - box_points_y[1]) * adjust_param)
    row_range = [box_points_y[1] + adjust_y, box_points_y[2] - adjust_y]
    # If the seeds' movable range in the horizontal or vertical direction is very small,
    # place the random seeds along the rotated rectangle's diagonals instead
    if (col_range[1] - col_range[0]) / (box_points_x[3] - box_points_x[0]) < 0.4 \
            or (row_range[1] - row_range[0]) / (box_points_y[3] - box_points_y[0]) < 0.4:
        points_row = []
        points_col = []
        for i in range(2):
            pt1, pt2 = box_points[i], box_points[i + 2]
            x_adjust, y_adjust = int(adjust_param * (abs(pt1[0] - pt2[0]))), int(adjust_param * (abs(pt1[1] - pt2[1])))
            if pt1[0] <= pt2[0]:
                pt1[0], pt2[0] = pt1[0] + x_adjust, pt2[0] - x_adjust
            else:
                pt1[0], pt2[0] = pt1[0] - x_adjust, pt2[0] + x_adjust
            if pt1[1] <= pt2[1]:
                pt1[1], pt2[1] = pt1[1] + adjust_y, pt2[1] - adjust_y
            else:
                pt1[1], pt2[1] = pt1[1] - y_adjust, pt2[1] + y_adjust
            temp_list_x = [int(x) for x in np.linspace(pt1[0], pt2[0], int(rand_seed_num / 2))]
            temp_list_y = [int(y) for y in np.linspace(pt1[1], pt2[1], int(rand_seed_num / 2))]
            points_col.extend(temp_list_x)
            points_row.extend(temp_list_y)
    else:
        points_row = np.random.randint(row_range[0], row_range[1], size=rand_seed_num)
        points_col = np.linspace(col_range[0], col_range[1], num=rand_seed_num).astype(np.int)

    points_row = np.array(points_row)
    points_col = np.array(points_col)
    hsv_img = cv2.cvtColor(src_image, cv2.COLOR_BGR2HSV)
    h, s, v = hsv_img[:, :, 0], hsv_img[:, :, 1], hsv_img[:, :, 2]
    # Flood fill from the randomly generated seeds one by one; ideally the whole plate gets filled
    flood_img = src_image.copy()
    seed_cnt = 0
    for i in range(rand_seed_num):
        rand_index = np.random.choice(rand_seed_num, 1, replace=False)
        row, col = points_row[rand_index], points_col[rand_index]
        # Require the random seed to sit on the plate background colour
        if (((h[row, col] > 26) & (h[row, col] < 34)) | ((h[row, col] > 100) & (h[row, col] < 124))) & (
                s[row, col] > 70) & (v[row, col] > 70):
            cv2.floodFill(src_image, mask, (col, row), (255, 255, 255), (loDiff,) * 3, (upDiff,) * 3, flags)
            cv2.circle(flood_img, center=(col, row), radius=2, color=(0, 0, 255), thickness=2)
            seed_cnt += 1
            if seed_cnt >= valid_seed_num:
                break
    # ====================== for debugging ======================#
    show_seed = np.random.uniform(1, 100, 1).astype(np.uint16)
    cv2.imshow('floodfill' + str(show_seed), flood_img)
    cv2.imshow('flood_mask' + str(show_seed), mask)
    # ====================== for debugging ======================#
    # Collect the filled pixels on the mask and compute the minimum-area bounding rectangle of that point set
    mask_points = []
    for row in range(1, img_h + 1):
        for col in range(1, img_w + 1):
            if mask[row, col] != 0:
                mask_points.append((col - 1, row - 1))
    mask_rotateRect = cv2.minAreaRect(np.array(mask_points))
    if verify_scale(mask_rotateRect):
        return True, mask_rotateRect
    else:
        return False, mask_rotateRect


# Plate localization
def locate_carPlate(orig_img, pred_image):
    carPlate_list = []
    temp1_orig_img = orig_img.copy()  # for debugging
    temp2_orig_img = orig_img.copy()  # for debugging
    contours, heriachy = cv2.findContours(pred_image, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for i, contour in enumerate(contours):
        cv2.drawContours(temp1_orig_img, contours, i, (0, 255, 255), 2)
        # Get the minimum-area bounding rectangle of the contour; the return value is rotate_rect
        rotate_rect = cv2.minAreaRect(contour)
        # Judge whether it is a plate from the rectangle's area and aspect ratio
        if verify_scale(rotate_rect):
            ret, rotate_rect2 = verify_color(rotate_rect, temp2_orig_img)
            if ret == False:
                continue
            # Correct the plate's orientation
            car_plate = img_Transform(rotate_rect2, temp2_orig_img)
            car_plate = cv2.resize(car_plate, (car_plate_w, car_plate_h))  # resize in preparation for the CNN plate recognition later
            # ======================== debugging: visualize the result ========================#
            box = cv2.boxPoints(rotate_rect2)
            for k in range(4):
                n1, n2 = k % 4, (k + 1) % 4
                cv2.line(temp1_orig_img, (box[n1][0], box[n1][1]), (box[n2][0], box[n2][1]), (255, 0, 0), 2)
            cv2.imshow('opencv_' + str(i), car_plate)
            # ======================== debugging: visualize the result ========================#
            carPlate_list.append(car_plate)

    cv2.imshow('contour', temp1_orig_img)
    return carPlate_list


# Left-right segmentation of the characters
def horizontal_cut_chars(plate):
    char_addr_list = []
    area_left, area_right, char_left, char_right = 0, 0, 0, 0
    img_w = plate.shape[1]

    # Count the edge pixels in each column of the plate
    def getColSum(img, col):
        sum = 0
        for i in range(img.shape[0]):
            sum += round(img[i, col] / 255)
        return sum

    sum = 0
    for col in range(img_w):
        sum += getColSum(plate, col)
    # A column's edge-pixel count must exceed 60% of the mean to count as part of a character region
    col_limit = 0  # round(0.5*sum/img_w)
    # Also constrain each character's width
    charWid_limit = [round(img_w / 12), round(img_w / 5)]
    is_char_flag = False

    for i in range(img_w):
        colValue = getColSum(plate, i)
        if colValue > col_limit:
            if is_char_flag == False:
                area_right = round((i + char_right) / 2)
                area_width = area_right - area_left
                char_width = char_right - char_left
                if (area_width > charWid_limit[0]) and (area_width < charWid_limit[1]):
                    char_addr_list.append((area_left, area_right, char_width))
                char_left = i
                area_left = round((char_left + char_right) / 2)
                is_char_flag = True
        else:
            if is_char_flag == True:
                char_right = i - 1
                is_char_flag = False
    # Manually finish the last, unfinished character segment
    if area_right < char_left:
        area_right, char_right = img_w, img_w
        area_width = area_right - area_left
        char_width = char_right - char_left
        if (area_width > charWid_limit[0]) and (area_width < charWid_limit[1]):
            char_addr_list.append((area_left, area_right, char_width))
    return char_addr_list


def get_chars(car_plate):
    img_h, img_w = car_plate.shape[:2]
    h_proj_list = []  # list of horizontal-projection runs
    h_temp_len, v_temp_len = 0, 0
    h_startIndex, h_end_index = 0, 0  # indices bounding the current horizontal-projection run
    h_proj_limit = [0.2, 0.8]  # filter out rows whose horizontal contour length is below 20% or above 80% of the plate width
    char_imgs = []

    # Project the binarized plate horizontally onto the Y axis and measure the lengths of
    # consecutive runs; there may be more than one run
    h_count = [0 for i in range(img_h)]
    for row in range(img_h):
        temp_cnt = 0
        for col in range(img_w):
            if car_plate[row, col] == 255:
                temp_cnt += 1
        h_count[row] = temp_cnt
        if temp_cnt / img_w < h_proj_limit[0] or temp_cnt / img_w > h_proj_limit[1]:
            if h_temp_len != 0:
                h_end_index = row - 1
                h_proj_list.append((h_startIndex, h_end_index))
                h_temp_len = 0
            continue
        if temp_cnt > 0:
            if h_temp_len == 0:
                h_startIndex = row
                h_temp_len = 1
            else:
                h_temp_len += 1
        else:
            if h_temp_len > 0:
                h_end_index = row - 1
                h_proj_list.append((h_startIndex, h_end_index))
                h_temp_len = 0

    # Manually close off the last horizontal-projection run
    if h_temp_len != 0:
        h_end_index = img_h - 1
        h_proj_list.append((h_startIndex, h_end_index))
    # Pick the longest run; its length must be more than 0.5 of the cropped plate height
    h_maxIndex, h_maxHeight = 0, 0
    for i, (start, end) in enumerate(h_proj_list):
        if h_maxHeight < (end - start):
            h_maxHeight = (end - start)
            h_maxIndex = i
    if h_maxHeight / img_h < 0.5:
        return char_imgs
    chars_top, chars_bottom = h_proj_list[h_maxIndex][0], h_proj_list[h_maxIndex][1]

    plates = car_plate[chars_top:chars_bottom + 1, :]
    cv2.imwrite('carIdentityData/opencv_output/car.jpg', car_plate)
    cv2.imwrite('carIdentityData/opencv_output/plate.jpg', plates)
    char_addr_list = horizontal_cut_chars(plates)

    for i, addr in enumerate(char_addr_list):
        char_img = car_plate[chars_top:chars_bottom + 1, addr[0]:addr[1]]
        char_img = cv2.resize(char_img, (char_w, char_h))
        char_imgs.append(char_img)
    return char_imgs


def extract_char(car_plate):
    gray_plate = cv2.cvtColor(car_plate, cv2.COLOR_BGR2GRAY)
    ret, binary_plate = cv2.threshold(gray_plate, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
    char_img_list = get_chars(binary_plate)
    return char_img_list


def cnn_select_carPlate(plate_list, model_path):
    if len(plate_list) == 0:
        return False, plate_list
    g1 = tf.Graph()
    sess1 = tf.Session(graph=g1)
    with sess1.as_default():
        with sess1.graph.as_default():
            model_dir = os.path.dirname(model_path)
            saver = tf.train.import_meta_graph(model_path)
            saver.restore(sess1, tf.train.latest_checkpoint(model_dir))
            graph = tf.get_default_graph()
            net1_x_place = graph.get_tensor_by_name('x_place:0')
            net1_keep_place = graph.get_tensor_by_name('keep_place:0')
            net1_out = graph.get_tensor_by_name('out_put:0')

            input_x = np.array(plate_list)
            net_outs = tf.nn.softmax(net1_out)
            preds = tf.argmax(net_outs, 1)  # predicted classes
            probs = tf.reduce_max(net_outs, reduction_indices=[1])  # probability of each prediction
            pred_list, prob_list = sess1.run([preds, probs], feed_dict={net1_x_place: input_x, net1_keep_place: 1.0})
            # Pick the plate with the highest probability
            result_index, result_prob = -1, 0.
            for i, pred in enumerate(pred_list):
                if pred == 1 and prob_list[i] > result_prob:
                    result_index, result_prob = i, prob_list[i]
            if result_index == -1:
                return False, plate_list[0]
            else:
                return True, plate_list[result_index]


def cnn_recongnize_char(img_list, model_path):
    g2 = tf.Graph()
    sess2 = tf.Session(graph=g2)
    text_list = []
    pro_list = []

    if len(img_list) == 0:
        return text_list
    with sess2.as_default():
        with sess2.graph.as_default():
            model_dir = os.path.dirname(model_path)
            saver = tf.train.import_meta_graph(model_path)
            saver.restore(sess2, tf.train.latest_checkpoint(model_dir))
            graph = tf.get_default_graph()
            net2_x_place = graph.get_tensor_by_name('x_place:0')
            net2_keep_place = graph.get_tensor_by_name('keep_place:0')
            net2_out = graph.get_tensor_by_name('out_put:0')

            data = np.array(img_list)
            # Digits, letters and Chinese characters: take the highest-probability entry of the 67-dimensional output as the prediction
            net_out = tf.nn.softmax(net2_out)
            preds = tf.argmax(net_out, 1)
            probs = tf.reduce_max(net_out, reduction_indices=[1])  # probability of each prediction
            my_preds, my_probs = sess2.run([preds, probs], feed_dict={net2_x_place: data, net2_keep_place: 1.0})
            # print(my_preds)
            print(my_probs)
            for i in my_preds:
                text_list.append(char_table[i])
            prob = 0
            for i in my_probs:
                prob = prob + i
            prob = prob / len(my_probs)
            return text_list, prob


# if __name__ == '__main__':
#     cur_dir = sys.path[0]
#     car_plate_w, car_plate_h = 136, 36
#     char_w, char_h = 20, 20
#     plate_model_path = os.path.join(cur_dir, './carIdentityData/model/plate_recongnize/model.ckpt-510.meta')
#     char_model_path = os.path.join(cur_dir, './carIdentityData/model/char_recongnize/model.ckpt-520.meta')
#     img = cv2.imread('carIdentityData/images/43.jpg')
#
#     # Preprocessing
#     pred_img = pre_process(img)
#     # cv2.imshow('pred_img', pred_img)
#     # cv2.waitKey(0)
#
#     # Plate localization
#     car_plate_list = locate_carPlate(img, pred_img)
#
#     # CNN plate filtering
#     ret, car_plate = cnn_select_carPlate(car_plate_list, plate_model_path)
#     if ret == False:
#         print("No license plate detected")
#         sys.exit(-1)
#     # cv2.imshow('cnn_plate', car_plate)
#     # cv2.waitKey(0)
#
#     # Character extraction
#     char_img_list = extract_char(car_plate)
#     print(len(char_img_list))
#     # CNN character recognition
#     text, pro = cnn_recongnize_char(char_img_list, char_model_path)
#     print(text)
#     print(pro)
def recognizePlatestr(src):
    cur_dir = sys.path[0]
    car_plate_w, car_plate_h = 136, 36
    char_w, char_h = 20, 20
    plate_model_path = os.path.join(cur_dir, './carIdentityData/model/plate_recongnize/model.ckpt-510.meta')
    char_model_path = os.path.join(cur_dir, './carIdentityData/model/char_recongnize/model.ckpt-520.meta')
    # img = cv2.imread('./carIdentityData/images/32.jpg')
    img = cv2.imread(src)

    # Preprocessing
    pred_img = pre_process(img)
    # cv2.imshow('pred_img', pred_img)
    # cv2.waitKey(0)

    # Plate localization
    car_plate_list = locate_carPlate(img, pred_img)

    # CNN plate filtering
    ret, car_plate = cnn_select_carPlate(car_plate_list, plate_model_path)
    if ret == False:
        print("No license plate detected")
        sys.exit(-1)
    # Character extraction
    char_img_list = extract_char(car_plate)
    print(len(char_img_list))
    # CNN character recognition
    text, confidence = cnn_recongnize_char(char_img_list, char_model_path)
    confidence = str(round(confidence, 3))
    str1 = ''.join(text)
    laststr = str1 + "#" + confidence
    return laststr


from xmlrpc.server import SimpleXMLRPCServer

server = SimpleXMLRPCServer(('localhost', 8888))  # initialize the server
server.register_function(recognizePlatestr, "get_platestr")  # register the function
print("Listening for Client")
server.serve_forever()  # keep waiting for incoming calls

# str = recognizePlatestr('./carIdentityData/images/23.jpg')
# print(str)




(When the RPC server code is not added to carPlateIdentity.py, right-clicking Run directly does produce the result.)

愿你 posted on 2020-4-2 13:56:47

愿你 posted on 2020-4-2 13:55
First, what I am trying to build: the WeChat mini-program front end uploads a license-plate photo to the back end (the back end is built with Django); the Django back end ...

But now the console shows this error:
====
111
D:\pycharm\djangocode\project6/media/media/wx09b36be785d8846c.o6zAJs2gF51BlvNkRFJvDsCl65Ws.CPvzAqHuWWDX9395e6d37e306b59ac9b49_dsxXny6.jpg
Internal Server Error: /upload_handle/
Traceback (most recent call last):
File "D:\Anaconda\envs\python363\lib\site-packages\django\core\handlers\exception.py", line 41, in inner
    response = get_response(request)
File "D:\Anaconda\envs\python363\lib\site-packages\django\core\handlers\base.py", line 187, in _get_response
    response = self.process_exception_by_middleware(e, request)
File "D:\Anaconda\envs\python363\lib\site-packages\django\core\handlers\base.py", line 185, in _get_response
    response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "D:\pycharm\djangocode\project6\myApp\views.py", line 80, in upload_handle
    print(server.get_platestr(path))
File "D:\Anaconda\envs\python363\lib\xmlrpc\client.py", line 1112, in __call__
    return self.__send(self.__name, args)
File "D:\Anaconda\envs\python363\lib\xmlrpc\client.py", line 1452, in __request
    verbose=self.__verbose
File "D:\Anaconda\envs\python363\lib\xmlrpc\client.py", line 1154, in request
    return self.single_request(host, handler, request_body, verbose)
File "D:\Anaconda\envs\python363\lib\xmlrpc\client.py", line 1170, in single_request
    return self.parse_response(resp)
File "D:\Anaconda\envs\python363\lib\xmlrpc\client.py", line 1342, in parse_response
    return u.close()
File "D:\Anaconda\envs\python363\lib\xmlrpc\client.py", line 656, in close
    raise Fault(**self._stack)
xmlrpc.client.Fault: <Fault 1: "<class 'ValueError'>:not enough values to unpack (expected 3, got 2)">
"POST /upload_handle/ HTTP/1.1" 500 96213

lixiangyv posted on 2020-4-3 07:27:21

Sorry, I haven't learned Django ...

愿你 posted on 2020-4-11 16:01:24

lixiangyv posted on 2020-4-3 07:27
I haven't learned Django ...

Thanks anyway~
